
In-memory data grids see 2013 as a big year



Alex Handy
January 17, 2013
Craig Blitz, senior group product manager of Oracle Coherence, the in-memory data grid cited as the most popular by Pezzini, said that Coherence is seeing increasing use in analytics, as well.

“Data grids play a key role in handling these new demands,” said Blitz. “Their distributed caching capabilities offload shared services by reducing the number of repeated reads across all application instances and, in the case of Oracle Coherence, by batching and coalescing writes. Application objects are stored in-memory in the application tier or a separate data grid tier (or both, using Oracle Coherence’s near caching capabilities). Oracle Coherence provides a rich set of query, Map/Reduce aggregation and eventing capabilities to provide for a scalable compute platform as well. Companies are using data grids to drive real-time event-based calculations as application objects are updated, then to provide fast access to those calculations.”
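The pattern Blitz describes can be sketched in a few lines against Coherence's Java API. The example below is illustrative only: the "trades" cache name, the Trade class and the average-price aggregation are assumptions made for this article, not details supplied by Oracle, and running it would require a configured Coherence cluster. It shows the two capabilities he calls out: grid-side aggregation over cached application objects, and event listeners that fire calculations as those objects are updated.

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;
import com.tangosol.util.aggregator.DoubleAverage;
import com.tangosol.util.filter.EqualsFilter;

import java.io.Serializable;

public class TradeGridSketch {

    // Hypothetical application object stored in the grid.
    public static class Trade implements Serializable {
        private final String symbol;
        private final double price;

        public Trade(String symbol, double price) {
            this.symbol = symbol;
            this.price = price;
        }

        public String getSymbol() { return symbol; }
        public double getPrice()  { return price; }
    }

    public static void main(String[] args) {
        // Reads and writes hit the grid, not the backing shared service.
        NamedCache trades = CacheFactory.getCache("trades");

        // Eventing: react as application objects are inserted or updated.
        trades.addMapListener(new MapListener() {
            public void entryInserted(MapEvent evt) { recalc(evt); }
            public void entryUpdated(MapEvent evt)  { recalc(evt); }
            public void entryDeleted(MapEvent evt)  { /* not needed in this sketch */ }

            private void recalc(MapEvent evt) {
                Trade t = (Trade) evt.getNewValue();
                System.out.println("Recalculating positions for " + t.getSymbol());
            }
        });

        trades.put("T-1", new Trade("ORCL", 34.10));
        trades.put("T-2", new Trade("ORCL", 34.25));

        // Grid-side aggregation: the average is computed where the data lives.
        Object avgPrice = trades.aggregate(
                new EqualsFilter("getSymbol", "ORCL"),
                new DoubleAverage("getPrice"));

        System.out.println("Average ORCL price: " + avgPrice);
    }
}
```

The point of the listener is that the recalculation runs where the data already lives, rather than each application instance re-reading the shared service to notice the change.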

Making money
Ted Kenney, director of marketing at McObject, has taken a more vertical approach to in-memory data grids. McObject now offers the eXtremeDB Financial Edition specifically targeted at stock-trading platforms and other high-volume, high-speed trading systems.

“Data in the financial space is fairly specialized market data—with things like trades and quotes—and a lot of it is time-series data,” said Kenney. “It's the same data point with different values that change at regular intervals over time. That requires a somewhat specialized approach to data management. Column-oriented databases are very good at that for fairly technical reasons. It's much more efficient to manage it in a column database than in a row-oriented one. That is the heart of the new features we've added in eXtremeDB Financial Edition.”
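Kenney's column-versus-row argument is easy to see in miniature. The sketch below is not eXtremeDB code; it is a plain-Java illustration, with a hypothetical Tick record and an average-price scan chosen for the example, of why a columnar layout lets a time-series calculation walk one dense array of values instead of dragging every field of every row through memory.

```java
import java.util.ArrayList;
import java.util.List;

public class ColumnVsRow {

    // Row-oriented: each tick is one object; a scan pulls every field along.
    static class Tick {
        final long timestamp;
        final double price;
        final int size;
        Tick(long timestamp, double price, int size) {
            this.timestamp = timestamp;
            this.price = price;
            this.size = size;
        }
    }

    // Column-oriented: one array per field; a price scan reads only prices.
    static class TickColumns {
        long[] timestamps;
        double[] prices;
        int[] sizes;
    }

    static double averagePriceRows(List<Tick> ticks) {
        double sum = 0;
        for (Tick t : ticks) {
            sum += t.price;          // whole Tick objects move through the cache
        }
        return sum / ticks.size();
    }

    static double averagePriceColumns(TickColumns ticks) {
        double sum = 0;
        for (double p : ticks.prices) {
            sum += p;                // sequential scan over one dense array
        }
        return sum / ticks.prices.length;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        List<Tick> rows = new ArrayList<>(n);
        TickColumns cols = new TickColumns();
        cols.timestamps = new long[n];
        cols.prices = new double[n];
        cols.sizes = new int[n];

        for (int i = 0; i < n; i++) {
            double price = 100 + (i % 50) * 0.01;
            rows.add(new Tick(i, price, 100));
            cols.timestamps[i] = i;
            cols.prices[i] = price;
            cols.sizes[i] = 100;
        }

        System.out.println("avg (rows):    " + averagePriceRows(rows));
        System.out.println("avg (columns): " + averagePriceColumns(cols));
    }
}
```

Both methods produce the same answer; the difference is that the columnar version touches only the price values, which is the access pattern a tick-by-tick calculation repeats millions of times a day.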

And that points to one of the key differentiators between in-memory data grids and other Big Data solutions, said Kenney: In-memory data grids don't have to be huge. An Apache Hadoop cluster is almost worthless unless it's hosting at least a dozen terabytes of data; anything smaller is faster to analyze on a local system than on a cluster designed for petabytes. In-memory data grids such as eXtremeDB Financial Edition, by contrast, can still yield performance benefits on smaller data sets (in the terabyte range and below).


