Interview With Haim Cohen About Data Pipelines • Full Stack Tech Radar Day 2019



Haim Cohen introduces streaming data pipelines
Tell us a little bit about your background

I have been in the development world for 19 years, mainly in the backend field. I began my career at a startup company, mostly writing Java, and from there I moved into Big Data and, more recently, into the world of Machine Learning.

Was there a turning point in your career? A moment when you understood that you needed to change direction or look at things from another perspective?

At one of the companies I worked for, 6-7 years ago, I had to process large amounts of data. Initially I used Java, but the performance was poor, unsatisfying, and very far from my goals. I started looking at Big Data technologies in general and Spark in particular, which was gaining popularity at the time. At first I was very skeptical, but after an initial implementation, even on a single machine, I saw the efficiency of using all cores at once and was very impressed. Since then I have stopped developing in the classic way and moved to Big Data technologies.
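The single-machine, all-cores efficiency Haim describes can be illustrated with a minimal sketch. This is plain Python `multiprocessing` rather than Spark (an assumption made for brevity; Spark partitions data across executors in a similar spirit, but at much larger scale):

```python
from multiprocessing import Pool

def square(n):
    # Stand-in for CPU-bound work applied independently to each record
    return n * n

def process_in_parallel(data, workers=4):
    # Fan the dataset out across worker processes, so all cores
    # work on partitions of the data at once
    with Pool(processes=workers) as pool:
        return pool.map(square, data)

if __name__ == "__main__":
    print(process_in_parallel(range(5)))  # [0, 1, 4, 9, 16]
```

The same map-over-partitions idea is what makes a Spark job on one machine dramatically faster than a single-threaded loop over the same records.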

What are you going to talk about at FTRD?

The technology I'll be presenting at FTRD is streaming data pipelines, covering the data ingest and processing stages through which data is retained in a fast data lake. The topic matters in the Big Data world because it makes data immediately accessible to the various users in the organization.
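The ingest, processing, and retention stages mentioned above can be sketched as a chain of generators. This is a hypothetical stdlib-Python illustration, not the actual stack from the talk; the in-memory list stands in for a real fast data lake:

```python
import json

def ingest(raw_events):
    # Ingest stage: parse each raw event as it streams in
    for line in raw_events:
        yield json.loads(line)

def process(events):
    # Processing stage: filter and enrich records before retention
    for event in events:
        if event.get("value") is not None:
            event["value_squared"] = event["value"] ** 2
            yield event

def store(events, data_lake):
    # Retention stage: append processed records to the "data lake"
    # (here an in-memory list standing in for real storage)
    for event in events:
        data_lake.append(event)

lake = []
raw = ['{"value": 2}', '{"value": 3}', '{"other": 1}']
store(process(ingest(raw)), lake)
# lake now holds the two enriched records; the third event was filtered out
```

Because each stage consumes the previous one lazily, records flow through the pipeline one at a time, which is the key property that makes data available to downstream users immediately rather than after a batch completes.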

Are there any influencers in the field you are following?

In order to keep up with the latest trends, I follow information sources such as DZone and Medium.

In your eyes, what is the most important thing when you look at the world of Full Stack development?

I think that the most important thing in Full Stack development is the ability to learn and internalize changing technologies and remain open to paradigm changes.