Alumnus Ibrahim Haddad on Open Source, Inquiring Minds and the Jobs of the Future
VP of Strategic Programs at the Linux Foundation discusses open-source AI and offers industry insights to a new generation of innovators.
In 1992, Dr. Walid Keyrouz, former computer science faculty member at LAU, walked into class carrying a shoebox of floppy disks. “You guys need to check this stuff out,” he said to his students. “It’s Linux.” One of the students was instantly curious, even though he knew nothing about the operating system, which had only recently been released. He took the shoebox home and, for the remainder of his university years, stayed up nights experimenting with the disks on his desktop computer.
In a curious twist of events, that same student, Ibrahim Haddad (BS ’96; MS ’98), would go on to become the vice president of Strategic Programs for Artificial Intelligence (AI) at the Linux Foundation, after spending a career with tech and telecom giants such as Open Source Development Labs, Motorola, Palm, Hewlett-Packard and Samsung Research. He landed his first formal job when Ericsson Research headhunted him while he was still a PhD student at Concordia University. In 2023, he was listed among Concordia’s 50 under 50 Shaping Tomorrow.
Today, Dr. Haddad is a devoted advocate for making AI accessible through open-source licenses. Based in Lebanon, within a stone’s throw of his alma mater’s Byblos campus, he works across time zones with industry leaders on collaborative projects to scale open-source technology and build the platforms that have become integral to our daily lives.
In this interview, Dr. Haddad provides an insider’s view of the rapid transformations in tech, explains how he kept pace with innovation, and shares timeless advice and inspiration for innovators across the board.
What first drew you to open-source technology and how did you build up your career in tech?
As soon as I finished my master’s at LAU, I went to Canada to pursue my PhD at Concordia University. A year later, my soon-to-be manager came across a paper I had published as part of my research, looked me up in the Yellow Pages, and offered me a job at Ericsson Research.
What motivated me to accept the job came down to one question I wanted to answer: could we collaborate, co-develop, co-innovate, and deploy open-source software as a replacement for what we were using at the time in telecom infrastructure? Throughout my career, the solutions to many challenges across several industries have always circled back to open-source software and this question.
What is open source and how does it impact our daily lives?
Open source is a model in which source code is developed collaboratively and released under an open-source license, which allows anyone to view, use, modify or redistribute the code. While open source started in the early 1990s in hobbyists’ circles, by the 2000s many companies had become involved.
Today, pretty much everything we touch is powered by open-source software and Linux (the operating system), from cars and smart TVs to traffic control, the internet, aviation and banking systems. Open source has become the de facto way to develop software because it allows companies to co-develop enabling technologies that would otherwise require too much time and too many resources to build in silos.
While I was VP of Research and Development at Samsung Research in 2018, I reached out to friends at the Linux Foundation to produce a market research report on open-source AI, to determine where we stood in terms of technology and fragmentation, and who was doing what in the open-source space. Our report examined the top 30 open-source AI projects.
Today, as part of my work, I track the top 400 projects that are critical to the open-source AI ecosystem. Every week, these projects produce one million new deployment-ready lines of code. No single company can outpace that level of innovation on its own. Thousands of companies and hundreds of thousands of developers contribute to these projects, including some that began as university projects.
Can AI be open source the same way software applications are, and how is it regulated?
AI is mostly open source in terms of its software building blocks. For example, virtually all the companies developing Large Language Models (LLMs) rely on PyTorch, a machine learning library and a project we host at the Linux Foundation. In lay terms, PyTorch is to AI what the Linux operating system is to the broader open-source ecosystem. We provide these building blocks so people can benefit from collective innovation and create their own differentiation on top.
Many AI companies are doing their best to inform regulators about how the technology works. While the first batch of regulations was subpar due to poorly informed regulators, today the EU’s AI Act approaches AI through a risk-based tiered system. Of course, many regulatory challenges persist, which is why regulation should be viewed as an evolving process, not a destination.
How can we benefit from this AI revolution, what jobs are at risk and what are AI’s margins of error?
In my opinion, over the next few years, AI will eliminate the need for certain transactional types of jobs that do not require critical thinking or empathy, for instance.
I see AI filling an assistant’s role. It gives you different capabilities to extend yours, allowing you to do your tasks better and more efficiently.
We are already seeing more jobs created by AI, such as prompt engineers. Whether we are creating as many jobs as we are eliminating, though, remains to be seen. We are not yet at a critical adoption point to make that kind of prediction.
Margins of error, which we refer to as “hallucinations,” are very much tied to the data a model is trained on, as we are already seeing primarily in large language models. One way to minimize hallucinations is to improve the quality and relevance of the training datasets, which dictate the model’s behavior and the quality of its outputs. In 2017, a cover story in The Economist announced that data had surpassed oil as the world’s most valuable resource. This makes sense because data is key to training the model, which, in turn, surfaces insights and draws connections.
You have witnessed massive growth in the industry in your career. How did you adapt to accommodate these seismic shifts?
One key lesson has been not to get too comfortable in any one job. In any industry, you might reach a point where you feel you are no longer learning anything new; take that as a sign that it is time for a change. Having to move a lot was very taxing for me and my family, but it was necessary. First, I realized I had to stay relevant to whatever advances were taking shape in the industry, and second, I wanted to work at companies that were at the cutting edge of innovation. I feel I made the right calls at the right times when accepting or turning down a job.
How would you advise current students and young graduates to prepare for a career in AI?
Be curious. As a student, I simply wanted to learn how things work and make use of my desktop computer and the Linux distribution that my professor gave me.
It was that curiosity and the drive to learn that shaped my career.
For university students who are curious about AI, there are thousands of opportunities out there, and the learning landscape is far better than it was 20 or 30 years ago, with access to platforms like YouTube and GitHub for collaborative development, as well as free classes and training. All you need is the curiosity and determination to learn and explore.
This interview has been edited and condensed for clarity.