Fujitsu develops stream aggregation technology for processing data

As the amount of data available for analysis increases and computation times lengthen, it becomes harder for an operation to keep aggregated results up to date. Updating results frequently when aggregation intervals are long has proved a challenge, but Fujitsu Laboratories has developed a stream aggregation technology it believes can solve the problem without redoing computations or re-reading the variety of data types that change over time.

The company believes that even with high-frequency updates and long aggregation intervals, data can be processed 100 times faster than with current methods. The technology could improve both large-volume batch processing and the processing of streaming data.

Fujitsu will share details of this technology at a special workshop lecture of the Special Interest Group on Software Interprise Modeling, of the Institute of Electronics, Information and Communication Engineers on Nov. 30, at the Takanawa campus of Tokai University in Japan.

Fujitsu Laboratories has developed a fast stream aggregation technology for long aggregation intervals and frequent updates, based on a combination of two techniques: rapid pattern matching and snapshot operation management.

Rapid pattern matching identifies relevant items directly from the incoming data stream, rather than temporarily accumulating all input data in memory, and then extracts only the items needed for aggregation. Snapshot operation management returns computation results for data types that change over time, without re-reading or recomputing the data.
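The release does not include implementation details, but the general idea behind the two techniques can be sketched as follows: filter relevant items as they arrive instead of buffering the whole stream, and maintain a running aggregate that is adjusted incrementally rather than recomputed. The class and field names below are hypothetical illustrations, not Fujitsu's actual design.

```python
from collections import deque

class StreamAggregator:
    """Minimal sketch of filter-then-aggregate over a sliding window.
    Only matched items are retained, and the aggregate ("snapshot")
    is updated incrementally, never recomputed from scratch."""

    def __init__(self, window_size, match):
        self.window = deque()        # holds only the matched items
        self.window_size = window_size
        self.match = match           # predicate: which items are relevant
        self.total = 0.0             # running aggregate

    def push(self, item):
        # Pattern-matching step: discard irrelevant items immediately,
        # instead of accumulating all input data in memory.
        if not self.match(item):
            return self.total
        self.window.append(item)
        self.total += item["value"]
        # Evict items that fall outside the aggregation interval,
        # adjusting the running total rather than re-reading the stream.
        while len(self.window) > self.window_size:
            old = self.window.popleft()
            self.total -= old["value"]
        return self.total

# Hypothetical usage: aggregate only temperature readings, window of 3.
agg = StreamAggregator(window_size=3, match=lambda e: e["sensor"] == "temp")
events = [
    {"sensor": "temp", "value": 1.0},
    {"sensor": "humidity", "value": 99.0},  # filtered out by matching
    {"sensor": "temp", "value": 2.0},
    {"sensor": "temp", "value": 3.0},
    {"sensor": "temp", "value": 4.0},       # evicts the 1.0 reading
]
results = [agg.push(e) for e in events]
print(results[-1])  # running sum over the last 3 matched readings
```

Each new item costs constant work regardless of how long the aggregation interval is, which is the property that makes frequent updates over long intervals feasible.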

For more:
- see the Fujitsu release

