5.14.2015

Demand forecasting by means of data-driven techniques

Editor’s note: this article is by Mathieu Sinn, Research Manager, and Francesco Dinuzzo, Research Scientist, members of the Exploratory Predictive Analytics team at IBM Research-Ireland

Mathieu Sinn
We all plan. Predicting what we'll need next week or next month. Businesses do the same. They forecast what kind, and how many of their goods and services will be needed by customers in the future. What if a machine could learn how, and forecast these demands? Maybe you, personally, wouldn't want one to. But at our lab in Dublin, we're developing machine learning algorithms for businesses, from retailers to energy and utility companies, to automate their demand forecasting.

Business planners rely on demand forecasting to find patterns in a multitude of data sources covering internal and external factors. They use algorithmic models and predictive analytics to create a “demand forecast,” which attempts to predict the amount of goods or services people and businesses will demand in the future. Retailers base in-stock management decisions like ordering and storage, as well as supply chain management, on demand forecasts. Energy utility companies use forecasting for scheduling operations, investment planning and price bidding. Now that we can integrate data from these disparate sources, our predictive analytics team is applying machine learning to improve demand forecasting accuracy and granularity.

Our team builds machines that learn from their mistakes and improve forecasting accuracy over time. Every time the machines apply the algorithm to compute a forecast, they adjust it to make it – and their predictions – better. Machine learning can automatically scale and adjust these predictive tasks. Instead of a domain expert building forecasting models manually, machine learning methods can combine large quantities of historical data with knowledge from domain experts to learn relevant models automatically and with better predictive accuracy. Our analytics and systems seamlessly export insights from data-at-rest into data-in-motion, providing reliable forecasts to support real-time operations. We see a great opportunity to use machine learning in demand forecasting and to apply it across industries.
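
To illustrate the idea of a forecaster that learns from its mistakes, here is a minimal sketch (not our production system; the feature set, learning rate and update rule are purely illustrative): a linear demand model whose weights are nudged by stochastic gradient descent each time the actual demand becomes known.

```python
import numpy as np

class OnlineForecaster:
    """Toy linear demand forecaster, updated online by gradient descent."""

    def __init__(self, n_features, lr=0.001):
        self.w = np.zeros(n_features)  # model weights, start at zero
        self.lr = lr                   # learning rate (would be tuned in practice)

    def predict(self, x):
        # Forecast is the dot product of weights and feature vector.
        return float(self.w @ x)

    def update(self, x, y_true):
        # Learn from the mistake: step against the gradient of the
        # squared forecast error. Real systems would also normalize features.
        error = self.predict(x) - y_true
        self.w -= self.lr * error * x

# Hypothetical usage with made-up features [bias, temperature, is_weekend]:
model = OnlineForecaster(n_features=3)
x_today = np.array([1.0, 21.5, 0.0])
print(model.predict(x_today))        # today's forecast
model.update(x_today, y_true=118.0)  # adjust once actual demand is known
```

Each call to update plays the role of the "learning from mistakes" step described above: the bigger the forecast error, the larger the correction applied to the model.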


Francesco Dinuzzo
Through our predictive analytics research on energy demand forecasting and simulation, we are seeing an increasing need for such data-driven, large-scale forecasting solutions across industrial sectors, which can employ these same techniques.

Demand forecasting for smarter energy grids

We have been collaborating with VELCO (Vermont Electric Power Company) since last year to build the Vermont Weather Analytics Center. Its goal is to provide smarter grid resiliency and management, including renewable energy, using our energy demand forecasting systems and analytics capabilities alongside high resolution weather forecasting from the IBM Deep Thunder technology.

“Renewable energy production has a strong dependency on weather; likewise, energy demand also depends on weather. Therefore, high resolution, high accuracy forecasting will be a key enabler of the coming transition to clean energy,” said Chandu Visweswariah, IBM Fellow and Director of IBM’s Smarter Energy Research Institute.

“The project is built around a Vermont-specific version of the Deep Thunder predictive weather model, coupled with a renewable energy forecasting model and an energy demand model. These models apply analytics to in-state and regional weather data to produce accurate weather, renewable energy and demand forecasts.”

We also recently published a paper with researchers from EDF (Électricité de France), one of the world’s largest energy providers, titled "Massive-Scale Simulation of Electrical Load in Smart Grids using Generalized Additive Models" (GAMs). In this paper, we describe how to simulate the energy load on a smart grid, including additional supply from renewable energy sources and demand from electric cars, to generate a more accurate energy demand forecast. Accurate load forecasts improve the efficiency of supply, as they help utilities reduce operating reserves, act more efficiently in the electricity markets, and provide more effective demand-response measures.
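
For readers unfamiliar with GAMs, the basic idea (sketched here schematically; see the paper for the exact model specification) is that the expected load at time t is written as a sum of smooth functions of individual covariates, for example

    load(t) = f1(temperature(t)) + f2(hour-of-day(t)) + f3(day-of-week(t)) + noise,

where each component fj is a smooth curve, typically a spline, learned from historical data. Because the effects are additive, each fitted component can be inspected on its own, which is why GAMs are both accurate and interpretable for load modeling.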

We concluded that by using real energy demand data, the right class of models, and IBM tools like InfoSphere Streams, businesses can extract insightful models from data at rest and deploy them directly to support real-time operations. And IBM SPSS Modeler can provide not only modeling but also visualization, helping utilities quickly capture the changing trends in supply and demand and rapidly deploy these insights to optimize the real-time operations of the grid and production.


From these two energy and utilities projects, our team believes that we can replicate similar use cases across many different industries and businesses. We will share more of our findings this July at the 32nd International Conference on Machine Learning (ICML) in Lille, France.

Read more about our work with machine learning and demand forecasting on our research areas and applications page.

5.13.2015

Silicon photonics: The future of high-speed data

Editor's note: This article is by Dr. Will Green, Silicon Integrated Nanophotonics department manager, IBM Research

IBM’s research in brain-inspired computing, quantum computing, and silicon photonics is preparing to take computing in entirely new directions. The neuromorphic chip is getting smarter, the quantum bits are being scaled out, and in the near future, my team’s CMOS Integrated Nano-Photonics Technology will help ease data traffic jams in all sorts of computing and communications systems – pushing cloud computing and Big Data analytics to achieve their full potential.

For the first time, we have designed and tested a fully integrated, wavelength-multiplexed silicon photonics chip capable of optically transmitting and receiving information at data rates up to 25 Gb/s per channel. This will soon make it possible to manufacture optical transceivers capable of transmitting 100 gigabits of data per second.

Silicon photonics technology gives computational systems the ability to use pulses of light to move data at high speeds over optical fibers, instead of using conventional electrical signals over copper wires. Optical interconnects, based on vertical-cavity surface-emitting laser (VCSEL) technology and multi-mode fiber, are already being used in systems today. But their transmission range is limited to a relatively short distance of about 150 meters. Today, large data centers continue to scale in size to support exponentially growing traffic from social media, video streaming, cloud storage, sensor data, and much more. The longest optical links in such systems can be more than a kilometer in length. As a result, new optical interconnect solutions that can meet these requirements at low cost are needed to keep up with future system growth.

How light boosts bandwidth

Our silicon photonics technology is designed to transmit optical signals via single-mode optical fibers, which can support links many tens of kilometers long. Moreover, we have built in the capability to use multiple colors of light, all multiplexed to travel within the same optical fiber, to boost the total data capacity carried. The recently demonstrated silicon photonic chip can combine four wavelengths (all within the telecommunications infrared spectrum), allowing us to transmit four times as much data per fiber. The chip demonstrates transmission and reception of high-speed data at 25 Gb/s over each of these four channels, so within a fully multiplexed design, we’re able to provide 100 Gb/s aggregate bandwidth.

A cassette carrying several hundred chips intended for 100 Gb/s transceivers, diced from wafers fabricated with IBM CMOS Integrated Nano-Photonics Technology

In addition to the expanded range and bandwidth per fiber, our new photonics technology holds several other advantages over what is available today. Perhaps most importantly, it is manufactured using conventional silicon microelectronics foundry processes, enabling volume production at low cost. In addition, the entire chip design flow, including simulation, layout, and verification, is enabled by a hardware-verified process design kit, using industry-standard tools. As a result, a high-speed interconnect circuit designer does not require in-depth knowledge of photonics to build advanced chips with this technology. They can simply follow the standard practices already in place in the CMOS industry.

This unified design environment is mirrored by our integrated platform, which allows us to fabricate both the electronic and photonic circuit components on a single silicon chip. Rather than breaking up the electrical and optical functions, we integrated the optical components side by side with sub-100nm CMOS electrical devices. This results in a smaller number of components required to build a transceiver module, as well as a simplified testing and assembly protocol, factors which further contribute to substantial cost reductions.

Performance of the fully integrated, wavelength-multiplexed silicon photonics technology demonstrator chip. The eye diagrams illustrate four separate transmitter channels (right) exchanging high-speed data with four receiver channels (left), each running at a rate of 25 Gb/s.

While the primary applications for silicon photonics lie within the data center market, driven by Big Data and cloud applications, this technology is also poised to have a large impact within mobile computing. There’s a need for low-cost optical transceivers to shuttle large volumes of data between wireless cellular antennae and their base stations, often located many kilometers away. As the data bandwidth available to mobile users increases generation after generation, the number of individual cells required to support the traffic does the same. Our technology can deliver faster data transfer in higher volume and across larger areas, in order to support the inevitable growth while controlling costs.

There has been significant discussion around the 50th anniversary of Moore’s Law and about whether it has reached its end. Silicon photonics fits into that "next switch" conversation. On the processor side, there’s still a fairly consistent trajectory in terms of CMOS technology scaling – down to 10nm, 7nm, and even smaller. The role of our CMOS Integrated Nano-Photonics technology will be to reduce communication bottlenecks inside of systems, and to allow expansion of their capacity for processing huge volumes of data in real time.

Kilometer-scale data centers are emerging. Big Data and the Internet of Things are connecting people and information in ways that were unimaginable only a few years ago. IBM’s silicon photonics technology will augment that growth on the ground, into the Cloud, and beyond.

IBM scientists use the STM to image molecules in liquid

Nirmalraj in the Noise Free Lab
Since the first microscope was invented, researchers and scientists around the world have searched for new ways to stretch their understanding of the microscopic world. In 1981, two IBM researchers, Gerd Binnig and Heinrich Rohrer, broke new ground in the science of the very, very small with their invention of the scanning tunneling microscope (STM).

Like no instrument before it, Binnig and Rohrer’s invention enabled scientists to visualize the world all the way down to its molecules and atoms. The STM was recognized with the Nobel Prize in Physics in 1986 and is widely regarded as the instrument that opened the door to nanotechnology and a wide range of explorations in fields as diverse as electrochemistry, semiconductor science, and molecular biology.

In a new paper appearing today titled "Capturing the embryonic stages of self-assembly - design rules for molecular computation" in Nature Scientific Reports, IBM scientists in Zurich are adding a new chapter to the STM's legacy by reporting a new methodology for reliably extracting incredibly high resolution images of the swarming behavior of molecules in situ, which translates to "on site."

I spoke with the lead author of the paper Dr. Peter Nirmalraj about his research and what's next.

Q. Why has it taken so long to use the STM for in situ imaging in liquid? 

Peter Nirmalraj (PN): In situ STM imaging (imaging in liquids at room temperature) has been around for the last 20 years; however, extracting high resolution data comparable to UHV STM standards has remained a challenge, mainly due to the electrochemical congestion and external noise interference that occur when performing such measurements in liquids at room temperature.

Q. How did you come up with the idea to make it work?

PN: We have previously demonstrated how to control molecular motion for stable electrical readouts. 


In the current work we employ the same STM tool, capable of measuring the dynamics of individual molecules at the liquid-solid interface with excellent spatio-temporal sensitivity. In particular, we chose electrically inert, low vapor pressure liquids as the medium, which do not interfere with the tunneling mechanism. The entire setup sits within our state-of-the-art noise-free laboratories in the Binnig and Rohrer Nanotechnology Center, which immensely aids such nanoscopic measurements in liquids.

The question we asked ourselves was: can we record in real time the evolution of an organic molecular layer and, in particular, capture the very early stages of self-assembly, rather than only imaging a fully packed, thermodynamically stable molecular matrix?


Information obtained from such carefully designed experiments can provide, and has provided, deeper insights into the fundamentals of molecular self-assembly, which is central to molecular computation and to refining step-by-step equilibration rules for agent-based algorithms (algorithmic self-assembly).

Q. Why did you choose C60 molecules for the molecular solution?

PN: Fullerenes are a well studied class of molecules. This makes it easier to calibrate our STM tool, as the dimensions and intermolecular packing arrangements of this system are known from both theory and previous STM studies. More importantly, fullerenes are compatible with our solvent of choice, n-tetradecane, and form a stable molecular solution (the molecules are well dispersed with minimal treatment and do not aggregate in solution). Currently we are exploring other molecules with different dimensions, from porphyrins to ferrocene.

Q. Could this technique be used for healthcare, to image bacteria and viruses for example?

PN: Yes, biomolecules, which are generally less rigid and less conductive in nature, can be imaged using STM when drop-cast from a liquid phase onto conductive metal surfaces. However, the solvents that generally support biomolecules are polar, which necessitates an additional step of insulating the STM probe, except for the apex, to minimize parasitic currents during tunneling. Such measurements also need to be performed in closed liquid cells with larger volumes to contain rapid solvent evaporation.

Q. What's next for your research?

PN: The next step would be to investigate naturally occurring pattern formation, better understand the local structural dynamics of organic molecules such as porphyrins as they evolve from a disordered phase to an energetically stable state, and verify their internal response to external electrical pulses.


Large data sets generated from such studies will then be directed towards constructing predictive algorithms for assembly of new 2D and 3D molecular configurations and for testing basic logic gate operations in molecular layers.

Follow the authors on Twitter at @stm_pnn and @HeikeRiel

This research is partly funded by the Marie Curie Actions-Intra-European Fellowship (IEF-PHY) under grant agreement N° 275074 “To Come” within the 7th European Community Framework Programme.

5.08.2015

Deep dive insights into Swift

Editor’s note: This blog entry is by IBM Fellow Michael Factor and IBM storage research engineer Dmitry Sotnikov from IBM Research – Haifa. The work was done together with Yaron Weinsberg from IBM Research - Haifa.

Michael Factor, IBM Fellow
When companies deploy a system, they have specific performance, durability, and cost objectives in mind. So, before physical deployment, they will run models and simulations to get a close approximation of how the system will meet those objectives. But after deployment, the system must be checked. This holds true even for open source systems, which have become more and more popular because of their flexibility in terms of available, interchangeable hardware and software. Our team at IBM Research-Haifa, building on open source tools, has been gaining experience and developing techniques to observe and monitor one of the most popular options, OpenStack Swift. It's the leading open source object storage system that runs in public and private clouds.

Let’s dive into Swift and what we learned from the data collected during monitoring.

Dmitry Sotnikov, IBM storage engineer
Increasing system complexity means increased monitoring complexity, since huge amounts of data need to be analyzed to find out what’s really going on. That’s where our work comes in. Our methodology uses an open source toolbox and enables understanding the behavior of Swift clusters by examining the data collected during monitoring.

Today, performance monitoring and troubleshooting of a running cloud-based object store is as much an art as a science. Although there is a plethora of open source monitoring tools for gathering system metrics, the real challenge is how to use them to find the root cause of a problem.

We developed a general, open-source-based, step-by-step methodology for understanding performance bottlenecks in a Swift system. Our solution uses standard tools including Logstash, collectd, StatsD, Elasticsearch, Kibana and Graphite. It also includes a simple additional Swift middleware we developed to gain further insights into the source of system bottlenecks.
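
To give a flavor of what such a middleware can look like, here is a minimal sketch (not the exact middleware from our work; the metric names and configuration keys are our own illustrative choices): a WSGI filter that times every request passing through a Swift proxy and reports the latency to StatsD, from where it can flow into Graphite dashboards.

```python
import time

import statsd  # Python StatsD client (pip install statsd)


class TimingMiddleware(object):
    """Report per-request latency of a Swift WSGI pipeline to StatsD."""

    def __init__(self, app, conf):
        self.app = app
        self.client = statsd.StatsClient(
            conf.get('statsd_host', 'localhost'),
            int(conf.get('statsd_port', '8125')))

    def __call__(self, env, start_response):
        start = time.time()
        try:
            # Hand the request to the rest of the pipeline.
            return self.app(env, start_response)
        finally:
            elapsed_ms = (time.time() - start) * 1000.0
            # Emit e.g. 'swift.request.GET' as a timing metric.
            self.client.timing(
                'swift.request.%s' % env.get('REQUEST_METHOD', 'UNKNOWN'),
                elapsed_ms)


def filter_factory(global_conf, **local_conf):
    """Standard PasteDeploy entry point for a Swift middleware."""
    conf = dict(global_conf, **local_conf)

    def factory(app):
        return TimingMiddleware(app, conf)
    return factory
```

A filter like this is wired into the proxy server's PasteDeploy pipeline in the Swift configuration, so every request is measured without touching Swift's own code.
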
Swift monitoring flow
Our methodology helps validate the correctness of a Swift cluster configuration, and identifies which important data should be presented in the visualization of the Swift parameters together with the system’s statistics. This visualization helps users gain a better understanding of the internals of Swift and its behavior; it also enables them to see potential problems and misconfigurations.

For example, if validation of Swift’s network configuration is required, for instance to understand unexpectedly low performance, it can be done using our methodology and the open source toolkit on which it is based. This can be seen in the following charts, which present the network utilization between the Proxy and the Object servers, as well as the Proxy’s public network utilization, for a write-only workload. Based on these charts one can easily validate that all the data received by the Proxy is replicated three times and sent to the Object servers: a write-only workload arriving at the Proxy at, say, 1 Gb/s should show up as roughly 3 Gb/s of traffic toward the Object servers.
Data visualizations
At the OpenStack Summit on May 20 in Vancouver, we will demonstrate the results obtained from our approach, used with an internal deployment of Swift. You are welcome to join us at 1:50 pm in room 109.

5.07.2015

Superlubricity Gets Quantified for the First Time

Authors of the Science paper: A. Knoll, E. Koren, C. Rawlings, E. Lörtscher (U. Duerig not pictured)

Leonardo da Vinci first discovered the rules of sliding friction hundreds of years ago, and since that time our scientific understanding of the force has become well established. That is, until you reach the nanoscale.

For example, it was only recently that scientists were able to verify the lack of friction, known as superlubricity, in graphite at the atomic scale. This understanding is an important step as scientists around the world, including those at IBM, continue to investigate devices known as microelectromechanical systems, or MEMS.


MEMS are miniaturized mechanical objects like tiny gears, pumps and sensors, which could be used for any number of applications, including targeted drug delivery, blood pressure sensors, and microphones for portable devices.

While several aspects of superlubricity were published in 2004, a quantitative description of the different interacting forces, including both friction and adhesion, didn’t exist until today.

Thanks to the paper "Adhesion and friction in mesoscopic graphite contacts," appearing today in the peer-reviewed journal Science, IBM scientists have not only uncovered the quantitative secret to understanding friction in such materials, like graphite, they even came up with a way to measure it.

The paper details how, for the first time, IBM scientists can mechanically measure the tension and friction of two sliding sheets of graphite. Since this was previously poorly understood and just based on theory, the team also had to derive a mathematical expression to illustrate what they were seeing with their atomic force microscope (AFM), which turned out to be in excellent agreement with theoretical models.

The answer the team discovered is that the friction is stochastic in nature and arises directly from the interaction between the misaligned lattices of the material, in this case graphite.


As reported in the paper:
SEM image showing several bearing devices pointing in different directions.

The results suggest that the friction force originates from a genuine interaction between the rotationally misaligned graphite lattices at the sliding interface. This is remarkable because it is a well known empirical fact and also predicted theoretically that fractional scaling is an extremely fragile interface property which can only occur if the lattice interaction is not perturbed by defects or contaminations.

Adhesion and friction are critical to understand with MEMS. At this scale, energy dissipation and wear have a huge influence on how the devices are designed, particularly which materials they are made of, due to the large surface-to-volume ratio.

IBM scientists have been motivated by this challenge, particularly around the use of carbon-based materials like graphite, which are very promising for MEMS applications. A general phenomenon associated with 2D layered materials like graphite is the strong suppression of sliding friction and striation forces, known as superlubricity.

Using an AFM, the team sheared the surface of the graphite and then took measurements which revealed the mechanisms of friction and adhesion.

IBM scientists are interested in using superlubricity in the possible design of a new transistor, commonly referred to as the "next switch." The lack of friction could make a transistor that generates less heat and therefore uses less energy.

The team also hopes that these findings will someday help other scientists design energy efficient MEMS devices.

5.05.2015

Multimedia and Visual Analytics

Special Issue of the IBM Journal of Research and Development

Current issue of IBM Journal of R&D
With the increasing growth of bandwidth and storage space, along with the proliferation of mobile devices, the information we generate and consume is becoming much more visual. On the content-generation side, multimedia data (image and video) is being generated and consumed at an extraordinary pace. 

On the consumption side, one expectation is that complex information will be packaged and delivered visually, enabling data scientists as well as business users to interact with the data in real time. Additionally, visual analytics improves the comprehension, appreciation, trust in, and communication of analytics results and insights for both business and scientific users.

As the IBM guest editors for this issue, Aya Soffer and Hui Su, note, the continued growth of multimedia content means that innovative methods, technologies, and systems are needed to manage and extract meaningful insight from this data. Similarly, in the area of visual analytics, new systems and techniques are needed to handle large datasets and to provide intuitive interfaces that enable users to explore and interact with Big Data and collaboratively develop new insights.

Our latest special issue of the IBM Journal of Research and Development includes papers on a variety of analytics and graphical approaches that facilitate the management, processing, and understanding of multimedia and visual information. Specific topics highlighted in the issue include medical, retail, biometric, and social-network areas, as well as neurosynaptic cores and much more.

Editor-in-Chief
IBM Journal of Research and Development

4.30.2015

Storlets: From research prototype to open source technology



Researcher Eran Rom
Editor’s note: this article is by Eran Rom, storage researcher at IBM Research – Haifa.

In a previous blog post devoted to storlets, IBM Fellow Michael Factor highlighted how storlets can be used to turn a software-defined object store into a smart storage platform. This is done by allowing the computation to run near the data, rather than bringing the data to the servers doing the computations. Michael's post addresses the potential of storlets in cost reduction as well as in enabling new services.

In this post we want to concentrate on the technology itself, but first, here are a few things that have happened with storlets since they were a research prototype.

Storlets advancements
  • Storlets played a central role in a first-of-a-kind solution, called Active Media Store, developed with Radiotelevisione Italiana (RAI).
  • The work with RAI was presented at the Paris OpenStack Summit, and adopted by OpenStack as a "superuser story".
  • A reference implementation of storlets is now publicly available under 'Open-I-Beam' on GitHub.
  • We started interactions with the OpenStack community on the question of ‘if and how’ to add storlet support into the official Swift release. We will be having a design session discussion on this topic at the upcoming Vancouver OpenStack Summit. We encourage all interested parties to attend.

Storlets on OpenStack Swift

Our implementation of storlets is integrated with OpenStack Swift. Swift is an open source implementation of an object store and is behind several public object store services, including the IBM SoftLayer object store. Part of the idea behind storlets is to provide a flexible means of extending the function of the object store, by giving Swift users the ability to upload code to be executed near the data.

Running user-written code inside the storage system calls for adequate security and isolation measures. This is where Docker comes into the picture. Docker is a popular Linux container management framework. Linux containers (LXC) are similar to virtual machines, only instead of virtualizing the hardware they virtualize the operating system.

In addition to providing security and isolation, Docker has tools for packaging and deploying executable images. Using Docker, our implementation allows the user to upload the storlet’s code along with a tailored image in which the storlet will execute. Thus, if a storlet relies on some non-trivial software stack, that stack can be packaged into a Docker image and deployed in a Swift cluster, to be used later for executing the user's storlets.

Writing a storlet involves implementing a single-method Java interface called IStorlet. That method, called invoke, has two major parameters: an input stream and an output stream. The input stream is used for consuming the data of the object on which the storlet is operating, and the output stream is used to write the results of the storlet's computation. Storlets work in a streaming fashion, i.e., they start outputting data before reading all the input data. This is due to the synchronous fashion in which storlets are invoked as part of the upload or download operations, as described next.

Once the Docker images and storlets are deployed, they can be invoked on data objects in Swift. Storlets can be invoked in two ways:
  1. Invocation during object download. In this case the storlet transforms the object before it is returned to the user. This can be used for scenarios such as pre-filtering data being retrieved for an analytics engine, or on-the-fly resolution reduction when downloading to a mobile device.
  2. Invocation during object upload. In this case the data stored is transformed from the data PUT by the user. One example use case is metadata enrichment, where a storlet can tag a data object with additional metadata while it is being uploaded.
In our current implementation, invoking a storlet during the upload or the download of an object involves adding a single header to the upload/download request. This header identifies the storlet to execute on the object that is the target of the request. Once the request is received by Swift, a pluggable middleware intercepts the request at the appropriate point: during download this point is when the response data is on its way back to the user, and during upload it is along the input path of the request.
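
To make this concrete, here is a hypothetical invocation using python-swiftclient. The header name ('X-Run-Storlet'), the storlet name, and the endpoint and credentials below are illustrative assumptions, not necessarily the exact names our implementation uses:

```python
from swiftclient.client import Connection

# Connect to a Swift cluster (endpoint and credentials are made up).
conn = Connection(authurl='http://swift.example.com/auth/v1.0',
                  user='account:user', key='secret')

# Download an object, asking Swift to run a previously deployed
# storlet on the data as it streams back to us. The extra header
# is what triggers the storlet middleware in the proxy.
headers, body = conn.get_object(
    'mycontainer', 'myobject',
    headers={'X-Run-Storlet': 'thumbnail-1.0.jar'})  # illustrative names
```

The same pattern applies on upload: a put_object call carrying the storlet header would have the data transformed before it is stored.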

Our code then routes the data to the storlet using file descriptors that are passed over Linux domain sockets to the storlet code running inside the Docker container. Other than these file descriptors, the Docker container has no access to any I/O device. This means the storlet's code has no network access and no access to Swift's own disks. All I/O is done via the file descriptors provided by our Swift plugin.
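
For the curious, passing an open file descriptor between processes over a Unix domain socket is a standard Linux technique using SCM_RIGHTS ancillary data. Here is a minimal sketch in Python (illustrative only; our actual plumbing differs in its details):

```python
import array
import socket

def send_fd(sock, fd):
    """Send one open file descriptor over a connected Unix domain socket."""
    fds = array.array('i', [fd])
    # SCM_RIGHTS ancillary data carries the descriptor into the peer process.
    sock.sendmsg([b'x'], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

def recv_fd(sock):
    """Receive one file descriptor sent with send_fd()."""
    fds = array.array('i')
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
    return fds[0]
```

The receiving process ends up with its own valid descriptor referring to the same open file, which is why the container can stream object data without being granted any broader I/O access.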

For those interested in more, the GitHub repository has comprehensive documentation that includes storlet samples and automated installation and configuration, giving you a quick start.