vFabric Data Director and vFabric Postgres say: “Hello World!” :-)
This is a fascinating backstory – and I’m glad to finally be able to talk about it. Two years ago, the exec team at EMC and VMware got together with some key R&D/CTO folks on each side and asked a fundamental question:
“How will the technology disruptors – commodity massive-multicore x86, low-cost RAM, virtualization, cloud computing models, flash – affect the world of databases?”
Initially, a small team of folks was tasked to come to the table with some proposals – and that was the birth of the initiative codenamed “Project Aurora”. The answers came back something like this:
- DBaaS will be important – “self-service” isn’t a term applied to the world of databases today, for the most part. Developers turn to strange alternatives when there isn’t a simple path for them to build what they want…
- Traditional RDBMSes tend not to be very “elastic” – that’s going to come under pressure.
- In-memory database models will start to become important.
- Scale-out, shared-nothing models will be particularly important in the world of business analytics.
Interestingly, that was one of the early triggers that led to the Greenplum acquisition on the EMC side (all that discussion was formative as we considered whether we could play in the land of Big Data analytics for structured and unstructured data with our traditional architectures).
On the VMware side, it started things that led to getting core developers on board, and also the GemStone acquisition. And… the Project Aurora team started work. It’s a VERY interesting team – it includes some of the key original developers at VMware: the brains behind the early days of creating the hypervisor (the Monitor), and some of the key brains behind the VMkernel, vMotion and VMFS.
Now we have the initial results of that initiative – GA, and out for the world to see! Congrats to the whole Aurora team!
So – what are we talking about?
- "Self-service": vFabric Data Director is designed to benefit both DBAs and Developers. It enables a self-service model that developers can go into the portal and provision/manage his/her own set of databases. For the Database Administrators, they feel safe doing this delegation (more safe than they do with the database instance model), because of the resource and fault containment that comes from the underlying vSphere isolation. That prevents any individual developer to mess up more than what he/she owns. Self-service is critical in enabling the "cloud model". With vFabric Data Director, one can actually log in either as admin, or end-user, and get different views – very similar to the multitenancy model of vCloud Director for generalized compute workloads. Database Adminstrators can decide to empower the end-user to become quite powerful over the databases and the resources they own.
- "database virtualization": What vFabric Data Director can be described as analogous to what vSphere did to servers. Where vSphere turns many physical servers into one fungible resource pool of CPU, Memory, Network and Storage; vFabric Data Director turns many database servers (running software instances) into one fungible database service. In other words, database virtualization happens on top of (and requires) server virtualization. This is worth explaining another way by asking this question: “What if you just did vCloud Director on vSphere and ran database instances?”. While it would work on a fundamental level, it wouldn’t be a virtualized database model. There are still many database server software instances running in VMs. As a user or a DBA, you still have to know/track which database goes into which database server software instance on which VM, including how to get to it. With Data Director, it becomes just one virtual database service. You don't care which db server or which VM actually hosts the particular DB. All DBs are organized logically, by orgs and logical groups and tags.
- "An included lightweight relational database": with this release, the team is including a Postgres implementation (vFabric Postgres) that is highly optimized to run on vSphere 5, and it’s built right into vFabric Data Director. It leverages all sorts of great things, like hot-add of CPU and memory, to get a very basic form of "elasticity". (And because it’s Postgres, standard PostgreSQL drivers and tools should work against it – see the connectivity sketch after this list.)
- "Built-in tools for common database tasks like backup, restore, point-in-time images, and cloning": the degree of out-of-the-box capability along these lines in the initial release of vFabric Data Director is… impressive. There are great capabilities out of the box, and as noted in the demonstration, you can imagine a wide set of ways this could be enhanced (for example, using an EMC Data Domain dedupe-target NFS datastore as an "external" backup target, and for replication as needed – the last sketch after this list shows that external-target idea in its plainest form).
- "A platform for an expanding DBaaS": in the demo below, we show not only the capabilities today, but also that we’re already well down the path of using it to provision not just "traditional RDBMS" models like Postgres, but also external databases (Oracle), scale-out, memory-oriented, shared-nothing distributed data management models (SQLFire), and scale-out, disk-oriented, shared-nothing analytics models (Greenplum and Greenplum Hadoop). You might also note in the press release announcing vFabric Data Director that Sybase and SAP are signing up to support it… Without getting too hung up on this, the core idea is that a "cloud DBaaS" model would span all sorts of database models.
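To make the self-service point above concrete, here’s a minimal sketch of what developer-driven provisioning could look like. This is a hypothetical illustration – the endpoint URL, payload shape, and auth scheme are my assumptions for the sketch, not the actual vFabric Data Director API:

```python
# Hypothetical sketch only: the URL, payload, and auth below are assumptions
# for illustration, NOT the actual vFabric Data Director API.
import requests

DATA_DIRECTOR = "https://datadirector.example.com/api"  # hypothetical endpoint
TOKEN = "developer-session-token"                       # hypothetical auth token

def provision_database(name, org, template):
    """Ask the service for a new database inside the org/resource pool
    the DBA has delegated to this developer."""
    resp = requests.post(
        f"{DATA_DIRECTOR}/orgs/{org}/databases",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"name": name, "template": template},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. connection host/port once provisioning completes

db = provision_database("orders-dev", org="engineering", template="small")
print(db)
```

The point is the shape, not the specifics: the developer asks for a database by name inside resources a DBA has already fenced off, and never touches a VM or a database server instance directly.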
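And here’s the "one virtual database service" idea from the database-virtualization point, reduced to a toy: consumers resolve databases by org and tag, while which VM backs each one is bookkeeping the service keeps to itself. All names here are made up for illustration, not Data Director’s actual object model:

```python
# Toy model of logical database organization: look up by org/tag, never by
# host VM. Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class Database:
    name: str
    org: str
    tags: set = field(default_factory=set)
    backing_vm: str = ""  # tracked by the service; invisible to consumers

class VirtualDatabaseService:
    def __init__(self):
        self._dbs = []

    def register(self, db):
        self._dbs.append(db)

    def find(self, org, tag=None):
        """Resolve databases logically, by org and optional tag."""
        return [d.name for d in self._dbs
                if d.org == org and (tag is None or tag in d.tags)]

svc = VirtualDatabaseService()
svc.register(Database("orders-dev", "engineering", {"dev"}, backing_vm="vm-042"))
svc.register(Database("orders-prod", "engineering", {"prod"}, backing_vm="vm-117"))
print(svc.find("engineering", tag="dev"))  # ['orders-dev'] and no VM in sight
```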
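One nice consequence of vFabric Postgres being a Postgres implementation: standard PostgreSQL drivers and tools should work against it unchanged. A minimal connectivity sketch with psycopg2 – the host, database name, and credentials are placeholders standing in for what Data Director would hand back:

```python
# Minimal connectivity sketch. Because vFabric Postgres is a Postgres
# implementation, a stock PostgreSQL driver like psycopg2 should work as-is.
# Host, dbname, and credentials below are placeholder assumptions.
import psycopg2

conn = psycopg2.connect(
    host="vpostgres.example.com",  # address handed back by Data Director
    dbname="orders-dev",
    user="devuser",
    password="devpass",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # reports the underlying Postgres version
conn.close()
```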
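Finally, the "external backup target" idea from the tooling point, in its plainest possible form: because the database speaks standard Postgres, an out-of-band dump to an NFS-mounted target (say, a Data Domain export) can be as simple as pg_dump. Data Director’s own backup/restore is far richer than this; the sketch below just shows the external-target concept, and every path and hostname is an assumption:

```python
# Illustrative external backup: pg_dump to an NFS-mounted dedupe target.
# Paths and hostname are assumptions; auth assumes a ~/.pgpass entry or
# PGPASSWORD in the environment.
import subprocess
from datetime import datetime, timezone

BACKUP_DIR = "/mnt/datadomain-nfs/backups"  # hypothetical NFS mount
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

subprocess.run(
    [
        "pg_dump",
        "--host=vpostgres.example.com",
        "--username=devuser",
        "--format=custom",  # compressed archive, restorable with pg_restore
        f"--file={BACKUP_DIR}/orders-dev-{stamp}.dump",
        "orders-dev",
    ],
    check=True,
)
```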
It’s worth pointing out that, in support of our joint vision, this can help enable external public-cloud DBaaS models and accelerate standing up private DBaaS models.
Rather than going on and on (and man, we could), it’s probably better to check out this demo:
Ok – now that the secret is out – I’d recommend checking out CAP2153 and CAP2154, sessions where you can find out more. Those sessions often don’t get as many attendees as they should, because they’re “incognito” before the launch. Also, find people wearing these shirts and ask for more info:
As an interesting note: If I COULD talk about all the stuff that goes on, all the advanced projects, all the crazy R&D you can do with $2B – man, I’m not sure if people would believe me. Imagine the first part of this blog post going up in early 2009 – you would think I was high. Am I high right now? :-)