It’s funny, I’ve almost started to think of the year as having these seasons: “Megalaunch –> EMC World –> Cisco Live –> VMworld SF –> inter-VMworld interregnum (aka breathe) –> VMworld Barcelona…” On other days, I feel like the seasons are: “Q1, Q2, Q3, Q4, Breathe” :-)
…Then I spend a little time with my family and friends and am reminded of the real seasons, and the fact that the world is spinning on its axis and revolving in its orbit (throw in the roughly 26,000-year axial precession and our orbit around the Milky Way if you’re that type of person :-)
Here we are in Fall (aka the “inter-VMworld interregnum,” or “mid-Q3” :-) and I thought it would be good to do another “Jobs” post.
Lots of GREAT jobs here, things begging for the right passion, the right skills, the right people. In each case, I’ll point to a contact (remember, EMC employees’ email format is “email@example.com”) or a specific requisition number if I have it (and you can go to the EMC “Jobs” site and look up the req).
My advice – don’t be lame and just send an email or a CV. The person you’re applying to will assume you have a solid LinkedIn profile (and will look it up). Point to a public example of your work, your passion. Point to your GitHub profile. Point to your community activity. Point to your blog. We live in a new social era – don’t apply the way you would have a decade ago! STAND OUT!
These span field (SE), tech marketing, IT, and services roles… Personally, I’ve found EMC to be a great place to work, a great place to make a career. It’s not for everyone, but many dig it!
Over the weekend, I saw this blog post about the disruptive XtremIO 2.4 –> 3.0 upgrade.
First of all, yes, it is accurate to call the XtremIO 2.4 –> 3.0 upgrade a disruptive operation. When customers using XtremIO 2.4 migrate to 3.0, there are big changes, big improvements. Think 2x better performance. Think 2–4x higher utilization due to compression.
We continue to support 2.4, so if customers want to sit tight and avoid the upgrade, they are entitled to do exactly that – they will continue to enjoy all the XtremIO awesome they are loving. To get all the new goodies above (at no extra cost!) they will need to pull the data off, upgrade, and then bring the data back – and we and our partners are always ready to help them do it.
All of the above is why so many customers are picking XtremIO, why Gartner put it here, and why it has become the fastest-growing storage product (by revenue) EVER.
But why is this particular upgrade disruptive? Why do disruptive storage events ever happen anymore?
Storage is persistent. This is patently obvious – but in general people don’t think through what this means (and why should they – they aren’t engineers!).
Anytime you touch either of two core parts of any persistence architecture – 1) the layout structure layer or 2) the metadata mapping layer – it means a disruptive migration to some degree.
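To make that concrete, here’s a toy sketch in Python (entirely hypothetical – NOT XtremIO’s actual on-disk format) of why changing either layer forces a migration. The “v1” layout below uses fixed-size slots with an implicit metadata map (logical block N lives at offset N × 8); “v2” packs variable-length (think: compressed) blocks behind an explicit map. There is no way to get from one to the other without reading every block out of the old structure and rewriting it into the new one:

```python
# Hypothetical toy formats for illustration only.
# v1 layout: fixed 8-byte slots; the metadata map is implicit
# (logical block N lives at offset N * 8).
# v2 layout: variable-length blocks, so an explicit metadata map
# (logical block -> (offset, length)) must exist.

def write_v1(blocks):
    """Pack blocks into fixed 8-byte slots; no per-block metadata needed."""
    return b"".join(b.ljust(8, b"\x00") for b in blocks)

def read_v1(data, n):
    """Implicit mapping: position alone locates a block."""
    return data[n * 8:(n + 1) * 8].rstrip(b"\x00")

def migrate_v1_to_v2(v1_data, count):
    """The only way to reach the v2 layout: read every block out of
    the old structure and rewrite it into the new one."""
    payload, index = b"", []
    for n in range(count):
        blk = read_v1(v1_data, n)            # pull the data off...
        index.append((len(payload), len(blk)))
        payload += blk                        # ...and lay it down in v2 form
    return index, payload

def read_v2(index, payload, n):
    """Explicit mapping: the metadata map locates each block."""
    off, length = index[n]
    return payload[off:off + length]

old = write_v1([b"alpha", b"beta", b"gamma"])
index, new = migrate_v1_to_v2(old, 3)
assert read_v2(index, new, 1) == b"beta"
# The v2 store is smaller (no slot padding), but getting there
# required touching every byte -- hence "disruptive".
assert len(new) < len(old)
```

The point of the sketch: the new layout buys real efficiency, but because both the on-disk structure and the mapping that finds the data changed, every byte has to move.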
BTW – it’s funny looking at some of the people who commented critically on the blog… who, as vendors, are themselves going through a huge disruptive event of their own! All the more reason not to listen to people who go negative, and to trust those who disclose warts and partner with you to work through them.
Disruptive upgrades affect ALL persistence architectures, all vendors at times. If you’re curious about the engineering reasons why (helpful to predict whether any future upgrade of any stack will likely be NDU or DU), as well as more on this particular XtremIO upgrade (and some more roadmap) read on!
Here’s the presentation I used at VMworld 2014 in San Francisco. Every year, there is an “everything including the kitchen sink” session I tend to do, which is a blend of “big topics” (here, it was what’s going on in Converged Infrastructure – including “Hyperconverged”) and “tactical topics” (what’s up with ScaleIO, RecoverPoint for VMs, ViPR, etc.).
Download, use, enjoy! BTW – many are looking for the softcopy of the “Converged Infrastructure taxonomy” I used here… It’s in this deck!
It’s an interesting thing that I’ve been seeing more and more – customers looking to leverage commodity off-the-shelf hardware and open-source software stacks.
Personally, I LOVE this idea. I think that “Software Defined” means that whenever you CAN do something with software on commodity hardware, you should – and I have said this publicly for years. See here: “Where the physical stuff that does work (data plane) can be software on commodity hardware, do it that way.” And even earlier, in 2012, here.
While EMC is associated “mentally” with big arrays dressed in “death star” black with kick-a$$ Cylon LEDs (and we do indeed do that :-) like VNX and VMAX, we completely embrace the “software on COTS hardware” market – with ViPR and ScaleIO.
What HAS surprised me is how many customers say “I was software + COTS… but I want YOU to provide the COTS hardware, at COTS HW + Software economics and bundle it, and support it as an appliance”.
It’s a little counter-intuitive, because by definition it narrows the hardware choices (from the whole ecosystem of OpenCompute options). However, it makes sense when one thinks of the many enterprises that lack the “bare metal as a service” organizational function that the hyper-scale folks have. This resulted in things like EMC ECS (more here).
I got a really interesting email from one of my EMC SE colleagues yesterday on this topic (non-edited but censored to protect the innocent):
“I can totally vouch for the ‘Customer Service’ aspect for anyone wanting to run COTS. Perspective might be useful… In my last role, I ran Development and Engineering for _________________ (service provider) – and we got pushed into a bad place. We ran a private and public cloud service for _____ (customer) internally, and for enterprises in EMEA and APJ. Despite our awesome network and DCs, the support function was bulky, inflexible and stuck in the old ways. Add to that a Product function that spent more time in ________ (a city) than where our market was… Anyway, the Product function wanted the lowest-cost storage possible for the market in Asia – so we deployed Supermicro kit, with ________ (a vendor SDS block storage stack) and ______ (a vendor SDS object stack) providing Block and Object. On paper, it was excellent, well designed, and it scaled, and worked well with CloudStack and Swift (we were right in there at the beginning…). On the balance sheet, we came in significantly below the cost of EMC or NetApp – which, with hindsight – I saw the EMC Sales guys do their best impression of a rabbit in headlights, before pitching the wrong kit – they didn’t understand SPs back then. (Ed: :-) )
Even the ‘Customer Service’ aspect we believed we had covered – the software was robust, designed with the Chaos Monkey in mind, and we kept spares onsite in ____ (city in the Americas), _______ (city in EMEA) and _______ (city in APJ) – easy, right?
If Ops staff could be consistent in following processes, they’d be engineers… but they weren’t. They never followed process, never marked things; we ran out of spares, ended up chasing SM for replacements and organizing pickups and deliveries; and monitoring of the backend ended up with DC staff doing a floor walk every 3 hours – the bundled software was so unreliable.
I ended up having ___________ (a vendor SDS block storage stack) hire guys to leave onsite in ________ (city) to keep their eye on operations. It was a management headache, a support nightmare, and despite it being cheap to procure, and 99.999% uptime, we realized the issue was with our ‘success metrics’.
What was the cost of tying my architects up chasing this nonsense instead of doing ‘real’ work on the platform? We were ahead of the game in cloud – a flag bearer with _______ (one of the first mega IaaS offers) – and this stunted our growth for 12 months while we sat on this and struggled to move forward. Good architects ended up getting frustrated and leaving – disruption, rehiring, etc. All to save $0.25 per GB. By the time we got back on track, we’d lost our technical edge in the market.
It is the reason why I always keep this picture (below) in my slide decks when talking about COTS – it is a great option to have, when you go in with your eyes open – and ECS is AWESOME. I talk to SP’s all the time about it – COTS without the risk.”
Interesting… I would argue that EVO:RAIL shares the model with ECS as well – software + COTS hardware packaged as appliances. There are rack-scale efforts down the hyper-converged path within VMware (EVO:RACK) and EMC (still top secret) that follow a similar approach.
The opinions expressed here are my personal opinions. Content published here is not read or approved in advance by Dell Technologies and does not necessarily reflect the views and opinions of Dell Technologies or any part of Dell Technologies. This is my blog; it is not a Dell Technologies blog.