It’s funny, I’ve almost started to think of the year as having these seasons: “Megalaunch –> EMC World –> Cisco Live –> VMworld SF –> Inter-VMworld interregnum (aka breathe) –> VMworld Barcelona…” On other days, I feel like the seasons are: “Q1, Q2, Q3, Q4, Breathe” :-)
…Then I spend a little time with my family and friends and am reminded of the real seasons, and the fact that the world is spinning on its axis and revolving in its orbit (throw in a ~26,000-year precession cycle and our rotation around the Milky Way if you’re that type of person :-)
Here we are in Fall (aka the “Inter-VMworld interregnum” or “mid Q3” :-), and I thought it would be good to do another “Jobs” post.
Lots of GREAT jobs here, things begging for the right passion, the right skills, the right people. In each case, I’ll point to a contact (remember, the EMC employee email format is “firstname.lastname@example.org”) or a specific requisition number if I have it (and you can go to the EMC “Jobs” site and look up the req).
My advice – don’t be lame and just send an email, or a CV. The person you’re applying to will assume you have a solid LinkedIn profile (and will look it up). Point to a public example of your work, your passion. Point to your GitHub portal. Point to your community activity. Point to your blog. We live in a new social era – don’t apply the way you would have a decade ago! STAND OUT!
These span field (SE), tech marketing, IT, and services roles… Personally, I’ve found EMC to be a great place to work, a great place to make a career. It’s not for everyone, but many dig it!
Read on for more!
Over the weekend, I saw this blog post about the disruptive XtremIO 2.4 –> 3.0 upgrade.
First of all, yes, it is accurate to call the XtremIO 2.4->3.0 upgrade a disruptive operation. When customers using XtremIO 2.4 migrate to 3.0, there are big changes, big improvements. Think 2x better performance. Think 2-4x higher utilization due to compression.
We continue to support 2.4, so if customers want to sit tight and avoid the upgrade, they are entitled to do exactly that – they will continue to enjoy all the XtremIO awesome they are loving. To get all the new goodies above (at no extra cost!) they will need to pull the data off, upgrade, and then bring the data back – and we and our partners are always ready to help them do it.
All of the above is why so many customers are picking XtremIO, why Gartner put it here, and why it has become the fastest-growing storage product (by revenue) EVER.
But why is this particular upgrade disruptive? Why do disruptive storage events ever happen anymore?
Storage is persistent. This is patently obvious – but in general people don’t think through what this means (and why should they – they aren’t engineers!).
Anytime you touch either of the two core parts of any persistence architecture – 1) the layout structure layer or 2) the metadata mapping layer – it means a disruptive migration to some degree.
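To make that abstract point concrete, here’s a minimal, purely illustrative sketch (NOT actual XtremIO code – the structures and names are invented) of why a metadata-mapping change forces a rewrite. Imagine v1 mapped each logical block to a fixed-size physical block, and v2 introduces compression, so each logical block now maps to a variable-length extent. The new code can’t interpret the old structures in place – every entry (and the data behind it) has to be read out and written back in the new form:

```python
# Illustrative only: two incompatible metadata formats and the
# migration between them. This is the engineering root of a "DU"
# (disruptive upgrade) event on any persistence stack.
from dataclasses import dataclass

@dataclass
class V1Entry:              # old metadata: logical block -> fixed physical block
    physical_block: int

@dataclass
class V2Entry:              # new metadata: logical block -> variable-length extent
    physical_offset: int
    compressed_len: int

def migrate(v1_map: dict[int, V1Entry], compress_len) -> dict[int, V2Entry]:
    """Rewrite every mapping entry. There is no in-place translation:
    the v2 layout packs variable-length extents, so offsets only exist
    after the data has been re-read and re-written."""
    v2_map, offset = {}, 0
    for lba, entry in sorted(v1_map.items()):
        clen = compress_len(entry.physical_block)  # e.g. 2-4x reduction
        v2_map[lba] = V2Entry(offset, clen)
        offset += clen
    return v2_map
```

The key takeaway: because v2 offsets depend on the compressed sizes of everything written before them, you can’t patch the old map entry-by-entry while serving I/O from it – hence the “pull data off, upgrade, bring data back” flow.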
BTW – it’s funny looking at some of the people who commented critically on the blog… some of whom, as vendors, are going through a huge disruptive event of their own! All the more reason not to listen to people who go negative, and to trust those who disclose warts and partner with you to work through them.
Disruptive upgrades affect ALL persistence architectures, all vendors at times. If you’re curious about the engineering reasons why (helpful to predict whether any future upgrade of any stack will likely be NDU or DU), as well as more on this particular XtremIO upgrade (and some more roadmap) read on!
Here’s the presentation I used at VMworld 2014 in San Francisco. Every year, there is an “everything including the kitchen sink” session I tend to do, which is a blend of “big topics” (here, it was what’s going on in Converged Infrastructure – including “Hyperconverged”) and “tactical topics” (what’s up with ScaleIO, RecoverPoint for VMs, ViPR, etc.).
Download, use, enjoy! BTW – many are looking for the softcopy of the “Converged Infrastructure taxonomy” I used here… It’s in this deck!
It’s an interesting thing that I’ve been seeing more and more – customers looking to leverage commodity off-the-shelf hardware and open-source software stacks.
Personally, I LOVE this idea. I think that “Software Defined” means that whenever you CAN do something with software on commodity hardware, you should – and I have said this publicly for years (see this here: “Where the physical stuff that does work (data plane) can be software on commodity hardware, do it that way.” Even earlier, in 2012, here).
While EMC is “mentally” associated with big arrays dressed in “death star” black with kick-a$$ Cylon LEDs (and we do indeed do that :-) like VNX and VMAX, we completely embrace the “software on COTS hardware” market – with ViPR and ScaleIO.
What HAS surprised me is how many customers say “I was doing software + COTS… but I want YOU to provide the COTS hardware, at COTS HW + software economics, and bundle it and support it as an appliance”.
It’s a little counter-intuitive, because by definition it narrows the hardware choices (from the whole ecosystem, or all the Open Compute options). However, it makes sense when you consider the many enterprises that lack the “bare metal as a service” organizational function the hyper-scale folks have. This resulted in things like EMC ECS (more here).
I got a really interesting email from one of my EMC SE colleagues yesterday on this topic (unedited, but redacted to protect the innocent):
“I can totally vouch for the ‘Customer Service’ aspect for anyone wanting to run COTS. Perspective might be useful… In my last role, I ran Development and Engineering for _________________ (service provider) – we got pushed into a bad place. We ran a private and public cloud service for _____ (customer) internally, and for enterprises in EMEA and APJ. Despite our awesome network and DCs, the support function was bulky, inflexible and stuck in the old ways. Add to that a Product function that spent more time in ________ (a city) than where our market was… Anyway, the Product function wanted the lowest-cost storage possible for the market in Asia – we deployed Supermicro kit, with ________ (a vendor SDS block storage stack) and ______ (a vendor SDS object stack) providing Block and Object. On paper, it was excellent, well designed and it scaled, and worked well with Cloudstack and Swift (we were right in there at the beginning…). On the balance sheet, we came in significantly less than the cost of EMC or NetApp – which, with hindsight – I saw the EMC Sales guys do their best impression of a rabbit in headlights, before pitching the wrong kit – they didn’t understand SPs back then. (Ed: :-) )
Even the ‘Customer Service’ aspect we believed we had covered – the software was robust, designed with the Chaos Monkey in mind, we kept spares onsite in ____ (city in the Americas), _______ (city in EMEA) and _______ (city in APJ) – easy, right?
If Ops staff could be consistent in following processes, they’d be engineers… but they weren’t. They never followed process, never labeled things, we ran out of spares, we were chasing SM for replacements and organizing pickups and deliveries – and the bundled monitoring software was so unreliable that monitoring of the backend ended up being DC staff doing a floor walk every 3 hours.
I ended up having ___________ (a vendor SDS block storage stack) hire guys to station onsite in ________ (city) to keep an eye on operations. It was a management headache, a support nightmare – and despite it being cheap to procure, with 99.999% uptime, we realized the issue was with our ‘success metrics’.
What was the cost of pulling my architects away from ‘real’ work on the platform to chase this nonsense? We were ahead of the game in cloud – a flag bearer with _______ (one of the first mega IaaS offers) – and this stunted our growth for 12 months while we sat on it and struggled to move forward. Good architects ended up getting frustrated and leaving – disruption, rehiring, etc. All to save $0.25 per GB. By the time we got back on track, we’d lost our technical edge in the market.
It is the reason why I always keep this picture (below) in my slide decks when talking about COTS – it is a great option to have, when you go in with your eyes open – and ECS is AWESOME. I talk to SP’s all the time about it – COTS without the risk.”
Interesting… I would argue that EVO:RAIL shares the model with ECS as well – software + COTS hardware packaged as appliances. There are rack-scale hyper-converged efforts within VMware (EVO:RACK) and EMC (still top secret) that follow a similar path.
Curious for people’s thoughts of course!
Every year for VMworld, we produce (purely for fun) a gag video – they are fun, and everyone lets their hair down :-)
All of the people in the videos are real people – EMCers or partners – in most cases, brothers and sisters in my EMC SE family.
In the past we’ve done:
And… this year we got LOTS of input. People really wanted us to back off the rap, maybe embrace more of a Disney-movie kind of vibe – and hey, Frozen’s “Let It Go” was just huge, so… Well – you can judge for yourself. We give you: Hardware for WHAT!
For what it’s worth – Cisco is a great partner – and it shows in this video, regardless of what anyone says. Thank you guys for helping us sponsor this year, and Dom (who leads the Datacenter PSS team – the UCS/Nexus technical specialists), for being willing to be an idiot with us on camera.
.. And for what it’s worth, EMC is a GREAT PLACE TO WORK!
[UPDATE – Sept 10th, 8:54ET – many have asked for the softcopy deck that has some of these taxonomies – I’ve posted it here]
Before I go much farther in talking about EVO:RAIL, I want to quickly make a black and white statement – based on how I expect analysts/press may misunderstand today’s announcements (we’ll see if they do): VMware is NOT getting into the hardware business :-)
Looking at the PR, and even the way EVO:RAIL is positioned as a “product” (to me, it is an OEM program, and a VMware software product – EVO:RAIL Manager – that helps OEMs build hyper-converged appliance products), I can see why people may be confused. Let me try to make this clear:
Ok – let’s put aside what the analysts/market read into it, and let me put my own thesis on it…
I’ll make a statement that some may find controversial (but I don’t think is controversial at all): I think almost every customer should go the converged infrastructure route. Frankly, I think this is basic: there is little value in assembling and hyper-optimizing infrastructure yourself unless you are a hyper-scale player. It’s also equally basic on a more important level: the more you can do to simplify infrastructure and direct more time and resources to the more important parts of getting to IaaS (Management/Automation/Self-Service/Business Management layers) – and even more importantly to the PaaS layer – the better, so long as the CI architecture you pick meets your particular app requirements.
The growth rates of the various converged infrastructure offers in the marketplace (not just ours) suggest people are voting with their dollars, and they’re voting in the direction of my statement above.
So – what IS controversial? That there’s no ONE “right way” to “do converged infrastructure”. Gasp! Yes, it’s true :-)
First, here’s MY definition of converged infrastructure (“CI” for short):
“CI is always a method of using infrastructure where compute, network and persistence are treated as a SYSTEM rather than as COMPONENTS”.
This is – at its root – the same story as always:
One of the most unexpectedly popular ideas I put out there was that there was a taxonomy (a way of grouping and ordering the seeming chaos) for the whole storage ecosystem: “Four Phylum of Storage Architectures”, or the “Four Branches of the Storage Tree of Life” (that blog post is here).
I think there IS likewise a “converged infrastructure phylum”, and it looks like this… READ ON!
As always, VMworld triggers a lot of serious, long, heavy, technical reads – but equally, VMworld always has a lot of fun :-)
This is the first of several!
#v0dgeball was awesome – congrats to team SMP – a great partner – who took the #1 spot! It’s amazing – the coolest things you start eventually become bigger than you – and live on. #v0dgeball started with Chris Hoff and me playfully smack-talking 5 years ago, and it has now become huge. 16 teams – and more than $20,000 raised for the #WoundedWarrior Project! Thank you all!
But VMworld is always the right time to release some fun silly videos – and we have several in store :-) Here’s the first!
World – welcome to “Life in the Flash Lane”. This video is MADE by the one and only Sam Maraccini, who has clearly found his alter-ego as “storage dude stuck in the ’70s”, and by the supporting cast with their incredibly subtle and awesome ’80s fashions and hair spray. Oh – and our EMC CMO Jonathan Martin coming out of what was clearly an awesome van (down by the river) doubling as a hotbox – because we take ourselves VERY seriously.
What a gas :-) I hope you guys like it as much as they clearly had making it – and we have some other surprises in store – we will throw some of our tropes out the window. Oh, and #RememberRuddy!
A big part of “software defined” is about the data plane on commodity hardware, to be sure. But just as big is the part about control plane abstraction. In the storage domain, a big part of the open EMC SDS strategy continues to be to create a software-only control abstraction for ALL storage. I mean this in every way:
The other upside is how easily you can “bolt in” new platforms without changing other things you do.
Put it this way… EMC has always believed in a “right tool for the job” approach to storage (including when the job is “give me one thing that does everything moderately well” – and that’s a VNX). The upside to this philosophy is that it is the BEST way to meet workload and customer requirements.
The downside – the more storage types you need – the more complex management gets - UNLESS you abstract the control plane from the storage platforms themselves.
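The shape of that abstraction can be sketched in a few lines. This is purely illustrative (NOT the ViPR API – the class and method names here are invented): the automation layer talks to one control plane, each array type plugs in as a driver underneath, and “bolting in” a new platform is just registering a new driver – nothing above changes.

```python
# Illustrative sketch of a control-plane abstraction over
# heterogeneous storage platforms (not real ViPR code).
from abc import ABC, abstractmethod

class StorageDriver(ABC):
    """Per-platform plug-in: each array type implements the same interface."""
    @abstractmethod
    def provision(self, name: str, size_gb: int) -> str: ...

class VNXDriver(StorageDriver):
    def provision(self, name, size_gb):
        return f"VNX LUN {name} ({size_gb}GB)"

class XtremIODriver(StorageDriver):
    def provision(self, name, size_gb):
        return f"XtremIO volume {name} ({size_gb}GB)"

class ControlPlane:
    """One API for the automation tools above, many platforms below."""
    def __init__(self):
        self.drivers: dict[str, StorageDriver] = {}

    def register(self, platform: str, driver: StorageDriver) -> None:
        # "Bolting in" a new platform touches nothing else.
        self.drivers[platform] = driver

    def provision(self, platform: str, name: str, size_gb: int) -> str:
        return self.drivers[platform].provision(name, size_gb)
```

The design point: the automation tooling only ever calls `ControlPlane.provision(...)`, so adding a new array type is a driver registration, not a change to every workflow and script you’ve built.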
Here’s an example (and a real world customer example):
This is why it’s always a good idea to put something (software-only) in between your platforms and your automation tools. Here’s a sneak peek at the next ViPR Controller release, which has the XtremIO integration!
BAM. Immediately, XtremIO is not only the best AFA out there, but also the AFA with the most integration with all sorts of heterogeneous tools (including the vRealize Suite – in this case, the elements formerly known as vCAC and vCO).
Anyone using ViPR doesn’t need to do a thing to their processes and tools to leverage XtremIO as a new storage option for workloads that will benefit from it.
This is why more and more customers are using ViPR (check out the photo from the SAP team here – and you’ve got to love it when a customer sends this sort of photo)
BTW - thanks to the great EMC team that made the demo: Todd Day, Chris Rigano, Rich Barlow, Mike Lee, Alex Candelaria, and of course the awesome EMC ViPR and XtremIO teams!
So… are you using ViPR? Why not? What are we doing right/wrong?
Lots of updates on the topic of “Software Defined Data Planes”! Since the last VMworld, where VSAN and ScaleIO were “new products” – they’ve had several quarters in the marketplace together (really, VSAN was “new” as it GAed at the beginning of the year, and ScaleIO has been GA for about a year longer).
As they both became generally available, there was a LOT of discussion between VMware and EMC at the Federation level – because, to be frank, these two products have some material overlap, and we knew that they would inevitably compete in the marketplace to some extent. So – where are we a few months in? Read on for updates, demos, and a sneak peek at something really, really cool!