Meet an IBM Scientist: Winnie Tatiana Silatsa Saha

Who: Winnie Tatiana Silatsa Saha
Location: IBM Research - Zurich
Nationality: Cameroonian

Focus: Electrical Engineering

“I’m currently working on developing a new high-performance, low-cost terahertz imager for passive imaging systems, based on CMOS batch manufacturing processes. It’s part of an EU project called TeraTOP.”

Her advice for young women:

“Don’t believe in stereotypes. I don’t feel comfortable with anything else but science and I don’t know why any girl should feel scared about going into science, math or engineering. 

“I think it’s very encouraging that IBM’s CEO is a woman who studied electrical engineering. This is a great motivator for the next generation of female scientists and engineers. There is a stigma that engineers don’t have what it takes to become successful managers, and this is a misconception.

“I feel incredibly comfortable working here at IBM, but I have to admit that after my professor encouraged me to apply, I hesitated. IBM is a global company with a renowned brand and brilliant scientists, so I expected a very closed environment. But it’s just the opposite here in Zurich. It’s very open and I’m learning a lot.”

When Winnie isn’t in the lab, she is learning Spanish, her seventh language, after French, English, German, Cameroonian Pidgin, Yembam and Bamoun. She is also on the University of Dresden judo team.



Profile of a scientist: Rei Odaira

Location: IBM Research-Austin

Nationality: Japanese

Focus: High Performance Computing Compiler Optimization

Compilers translate a program’s source code into instructions a processor can execute. So, the better a compiler optimizes that code, the faster the program runs. Compilers, though, have to work within a computer’s constraints. The number of processors, the amount of memory, even the programming language all influence a compiler’s effectiveness. So, engineers like Rei Odaira develop ways to optimize them.
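Constant folding is a classic example of the kind of work a compiler optimization does: anything that can be computed at compile time never has to run on the processor at all. The sketch below is purely illustrative (a hypothetical toy, not IBM's compiler technology):

```python
# Toy constant folding: one of the simplest compiler optimizations.
# An expression tree like (2 * 3) + x is rewritten to 6 + x before
# any machine code is generated, so the multiplication never runs
# at execution time. (Illustrative only; real compilers do far more.)

def fold(node):
    """Recursively replace constant subtrees with their computed value."""
    if isinstance(node, tuple):          # ('op', left, right)
        op, left, right = node
        left, right = fold(left), fold(right)
        if isinstance(left, int) and isinstance(right, int):
            return {'+': left + right, '*': left * right}[op]
        return (op, left, right)
    return node                          # a constant or a variable name

expr = ('+', ('*', 2, 3), 'x')           # (2 * 3) + x
print(fold(expr))                        # → ('+', 6, 'x')
```

The subtree `(2 * 3)` collapses to `6` at "compile time," while the variable `x` is left for run time, which is exactly the trade a real optimizer makes at much larger scale.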

Rei joined IBM Research-Tokyo 10 years ago to optimize System z mainframe compilers. The team he joined had invented the technology in the mid-1990s for High Performance Fortran compilers. And in 1995 – while Rei was still at the University of Tokyo – they built the Java virtual machine and the Java just-in-time compiler that have been embedded in every IBM software product that uses Java, including WebSphere. Rei took notice of the world-famous work happening only 20 kilometers from campus, and wanted to be a part of the group.

“College students at the University of Tokyo cannot declare a major for the first two years of school. They can only choose ‘sciences’ or ‘liberal arts’ as general areas of study. I originally wanted to study mathematics and physics when I entered the university. But this was also the time of the Internet boom of the mid- and late 1990s. So, I began learning about things like Linux. And also, IBM’s just-in-time compiler was the fastest in the world at that time. And because that team was in Tokyo, my classmates and I knew their work very well – from papers they published, to conferences they attended.

“My first job [at IBM] was to optimize the compiler on the System z mainframe. Compiler optimization is all about getting as much performance out of hardware as possible. It was a perfect match for my computer science background and my love of working on hardware.

System z, though, is an interesting challenge. It has 16 registers (places on a computer processor where data is kept), while other machine architectures, like our Power systems, have several more – and so have more ways to spread out and execute a workload. But our algorithms improved z’s middleware efficiency by 3 percent – a major breakthrough in 2005, considering how fast the mainframe already was and the limited ways to optimize it.
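The register constraint Rei describes can be made concrete. When more values are "live" at the same moment than the machine has registers, the compiler must spill the extras to slower memory, so part of an optimizer's job is keeping that overflow small. The toy below (a hypothetical sketch, not an IBM allocator) simply measures peak register pressure against a given register count:

```python
# Toy register-pressure demo: count spills for a machine with k registers.
# Each value has a live interval (start, end); whenever more values are
# live at once than there are registers, the extras must be spilled to
# memory. (Hypothetical sketch; real register allocators are far subtler.)

def count_spills(intervals, num_registers):
    """Return how many values exceed the register budget at peak pressure."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # value becomes live
        events.append((end, -1))    # value dies
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return max(0, peak - num_registers)

# Twenty simultaneously live values fit easily in a larger register
# file, but overflow a 16-register machine like System z.
intervals = [(i, i + 25) for i in range(20)]
print(count_spills(intervals, 32))  # → 0
print(count_spills(intervals, 16))  # → 4
```

The point of the sketch is the asymmetry: the same workload that needs no spills on a machine with more registers forces several on a 16-register one, which is why squeezing even a few percent out of z's compiler was hard-won.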

On OpenPOWER and moving to the US

“I moved to Austin in January of this year to manage a systems team working on the Power system’s Coherent Accelerator Processor Interface (CAPI) Flash. The opportunity actually came up last April, when one of my managers – while on a visit to the Austin lab – was asked who could manage their local team working on OpenPOWER optimization. And my name came up. My family thought it was a great opportunity, so it wasn’t a hard decision to say ‘yes.’

“Now in Austin, my scope has broadened from System z compilers and runtimes like Java and Ruby to developing ways to exploit CAPI Flash – an accelerator that can access and analyze unstructured data stored in Flash memory. This means it can optimize workloads on the Bluemix cloud platform and the SoftLayer infrastructure it runs on, to do things like process genome sequence data in a few hours, versus the day or so it takes now.”

Tips on the transition from school to industry

“University studies give you the skills and knowledge for a topic – computer architecture and algorithms, in my case. Coming to IBM meant picking up new skills, like writing papers and making presentations. And also learning how to contribute to products – something you don’t do in school.

“At first, I had these ideas of how to change a product. They didn’t go over too well because it meant implementing a completely new thing, like an algorithm or function, in the development and maintenance of that product. I had to learn how to work in the industry, balancing business needs – and the many parts that come together to make a product work – with how my own ideas could make an impact.

“But at IBM you also have plenty of opportunities to partner with academia. I write papers for industry and academic publications. I am a member of, and on the 2015 organizing committee for, the International Conference on Principles and Practices of Programming on the Java platform (PPPJ). I was also an external review committee member for Programming Language Design and Implementation (PLDI) earlier this year, and am now an editorial committee member for the Information Processing Society of Japan’s Transactions on Programming.”


IBM POWER8 Technology

Special Issue of the IBM Journal of Research and Development

Over the years, IBM Power® systems have played key roles for both commercial enterprise computing and high-performance computing. Our latest special issue of the IBM Journal of Research and Development emphasizes new hardware and software approaches that are foundational for the IBM POWER8 technology. This technology provides a data-optimized design that is well-suited for analytics and other Big Data problems of today, as well as cloud-based workloads across multiple environments.

As noted in the Preface for this issue, generation after generation of Power processors has introduced advanced and highly effective techniques, such as large-scale non-directory-based symmetric multiprocessors (SMPs), flexible and dynamic partitioning of resources among operating systems executing in the same SMP, simultaneous multithreading (SMT) multiprocessor chips, and high-frequency processor design.

The POWER8 technology is the successor to POWER7/POWER7+ technology, with a focus on improved thread and core performance, SMT, reliability, larger caches, transactional memory, field-programmable gate array support, vector processing, accelerators, increased parallelism through additional cores, and much more.

Clifford A. Pickover
IBM Journal of Research and Development


Hybrid storage for the hybrid cloud

By Arvind Krishna, Senior Vice President and Director of IBM Research

IBM Research has long played a pivotal role in the evolution of storage. In 1956 IBM researchers helped to create RAMAC, the first magnetic hard disk. They also developed the giant magneto-resistive head for disk drives in the 1980s that still serves as the basis for all of today’s disk drives. In 1995 they helped IBM to win a National Medal of Technology for rewritable disks. Now it’s time for us to take another big step.

With data growing at 50 percent per year, IBM is investing $1 billion to manage this digital wellspring with storage software for the hybrid cloud. This five-year investment includes research and development of new cloud storage software, object storage and open standard technologies such as OpenStack.

While perhaps not quite as captivating as watching RAMAC’s massive spinning disks must have been, the archiving capability in this new portfolio, called IBM Spectrum Storage, did win an Emmy back in 2011. If you like having digital media instantly available on everything from your phone to your smart TV, you have IBM’s Spectrum Archive and our researchers to thank.

Living up to our heritage of developing industry-leading technology for IBM products, IBM Research played a significant role in this announcement, inventing four of the six IBM Spectrum Storage offerings and contributing heavily to the other two.

Unboxing the full spectrum of data storage

We predict that storage software will overtake storage hardware by 2020, by which time it will have to manage 40 zettabytes (40 sextillion bytes) of data. We believe most of that data will be in the hybrid cloud because of the flexibility it offers businesses. A company that has your data, or data you want, will be able to manage, analyze, add to, and transfer it all from a single dashboard – something impossible to do today on storage hardware that sits alone in a data center.

The other major benefit of storage software is that it can access and analyze any kind of data wherever it lives, no matter the hardware, platform, or format. So, from mobile devices linked to your bank, to servers full of unstructured social media information, data – via the cloud – can be understood.

This technology is already demonstrating its value. For example, Caris Life Sciences is using part of the Spectrum portfolio to speed up the company’s molecular profiling services for cancer patients. Scientists at DESY, a major research center in Germany, use Spectrum to crunch more than 20 GB of data per second to study atomic structures.

Beyond the next five years and all of their zettabytes, software-defined storage can help lead us to new technologies like phase-change memory (PCM), STT-RAM, and beyond. In fact, our scientists in Zurich made a breakthrough last year in the materials development of PCM, which promises to bridge the performance gap between main memory and storage in electronics from mobile phones to cloud data centers. And its unique physical properties make it ideal to serve as the memory for our work on brain-inspired chip architecture.

That’s what’s so exciting about the storage world – it’s always moving forward. As far as we’ve come in the storage evolution, the journey is just beginning. As it has done in the past, IBM Research will be there every step of the way.