For global banks that want accurate, cross-country views of customer relationships (and must comply with banking regulations), the ability to store all customer data in one database spanning multiple geographic regions has long been a Holy Grail. All transactional and historical information about each customer would live there, so if a customer tried to withdraw $20,000 from an account in London after withdrawing $30,000 from a different account in New York that morning, the bank would see it.
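That scenario boils down to a single query against one logical database. A minimal sketch, using an in-memory SQLite table with an invented schema (the table and column names here are assumptions for illustration, not anything TransLattice documents):

```python
# Hypothetical illustration: with one logical database, a single query
# sees the day's withdrawals across regions. Schema and data are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE withdrawals (customer TEXT, region TEXT, amount INTEGER)")
db.execute("INSERT INTO withdrawals VALUES ('alice', 'new-york', 30000)")
db.execute("INSERT INTO withdrawals VALUES ('alice', 'london', 20000)")

# One query, both regions visible:
total = db.execute(
    "SELECT SUM(amount) FROM withdrawals WHERE customer = 'alice'"
).fetchone()[0]
print(total)  # 50000
```

With siloed per-region databases, each site would run this query against only its own rows and miss the other region's withdrawal entirely.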
According to TransLattice, the Elastic Database it's announcing today is the first geographically distributed SQL database. "We've put together a system that's able to combine resources and nodes from different regions containing different data into a cohesive, single database," says CTO Mike Lyle. "We see huge benefits to users from this — performance and scaling, the reliability of the application, but also business agility and the ability to have all of their data on important transactional workloads within one data store for the very first time."
For this omni-database to work, each location needs to use the same database software. "The core of the TransLattice technology is you can deploy nodes in different regions," Lyle says. "They can be built on top of bare-metal hardware, they can run within a virtualized environment, or on a private or public cloud. We then get all the nodes to talk to one another and act like one massive database server, even though the individual components and tables are spread across geographies."
Rather than having database servers pass logs back and forth to share information, which leads to latency problems, TransLattice's technology has servers announce the changes and actions they're making to one another, with a recognition of which transactions depend upon each other. "Previously you'd be using separate technologies for site-local redundancy, site-local scaling, disaster recovery and federating data across different locations in order to maintain regulatory compliance," Lyle says. "We combine all of those into one cohesive mechanism, which results in significant cost savings from operational simplicity, but also the ability to have the data closer to end users for higher performance and to be able to comply with regulations." The geographic distribution also provides redundancy and backup among the nodes.
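The dependency-tracking idea Lyle describes can be sketched loosely: each node broadcasts a transaction along with the IDs of the transactions it depends on, and a receiving node applies it only once those dependencies have been applied locally. TransLattice has not published its protocol, so the names and structure below are assumptions, not its actual mechanism:

```python
# Hypothetical sketch of dependency-aware change announcement (as opposed
# to shipping whole logs): peers hold an announced transaction until its
# declared dependencies have been applied locally.

class Node:
    def __init__(self, name):
        self.name = name
        self.applied = set()   # IDs of transactions already applied
        self.pending = []      # announcements waiting on dependencies

    def receive(self, txn_id, depends_on, operation):
        """Handle a change announcement from a peer node."""
        self.pending.append((txn_id, set(depends_on), operation))
        self._drain()

    def _drain(self):
        # Apply any pending transaction whose dependencies are all
        # satisfied; repeat until no further progress is possible.
        progressed = True
        while progressed:
            progressed = False
            for item in list(self.pending):
                txn_id, deps, operation = item
                if deps <= self.applied:
                    operation()
                    self.applied.add(txn_id)
                    self.pending.remove(item)
                    progressed = True

balances = {"london": 50_000}
node = Node("london")
# The withdrawal (t2) depends on the deposit (t1) but arrives first,
# so it waits until t1 has been applied:
node.receive("t2", ["t1"], lambda: balances.update(london=balances["london"] - 20_000))
node.receive("t1", [], lambda: balances.update(london=balances["london"] + 30_000))
print(balances["london"])  # 60000 -- both applied, in dependency order
```

Because only transactions that actually depend on one another are ordered, unrelated transactions in different regions can proceed without waiting on a shared log stream.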
One use case among financial services companies, he says, is tax management applications. "These applications are very elastic, from January to May they may be four to five times larger than they are the rest of the year," Lyle says. "There's a desire to be able to make use of public cloud during that time for the elasticity and scaling benefits."
For banks with existing databases that don't speak to one another, TransLattice provides interoperability and "extract, transform and load" tools that let its database integrate with existing databases and business systems. But much of the initial uptake is for new apps. "When you want to roll out a new product to a region, you can turn on cloud instances on top of a public cloud, you can make use of an existing virtual infrastructure or you can have physical instances on top," Lyle says.
TransLattice's technology breaks data into tens of thousands of partitions, each of which is assigned redundantly to different nodes within the system. A company can set policies for where data can be stored. The software also monitors access patterns and knows what portions of data are frequently accessed from each location. "We use that information to try to pre-position data close to the end users that are going to consume it," Lyle says. "Finally, we have a certain amount of deterministic randomness or fuzzing that we use to balance the workload between different nodes."
A 25 terabyte TransLattice database would cost about $300,000 per year (about $79,000 per node).