Interview

AITech Interview with Hussein Hallak, Co-founder of Momentable

Explore strategies for balancing AI innovation with regulatory control amidst rapid technological advancements

Hello Hussein, can you share with us your professional journey and how you became involved in the field of AI and technology, leading to your role as co-founder of Momentable?

I’ve always been fascinated with technology and sci-fi. AI is one of those things that sticks in your mind, and you can’t help but think about it. 

I studied engineering and worked in tech, and even with all the advancements in technology we have been witnessing in the past two decades, AI was one of those things that we always thought would remain a sci-fi pipe dream for a long time. 

This is not because nothing was happening. But those who work in tech know these technologies take time to evolve, and advancements are usually several degrees of separation away from the everyday user. 

I’m always learning, reading, and building tech products, so AI was a field of study; however, implementing it was never accessible for early-stage products. 

The status of AI has forever changed. OpenAI’s ChatGPT launch has had a remarkable impact on the field of AI and tech in general. AI is now available for regular users. People like me working in tech can now use AI in everything they are doing, which will accelerate product development and will impact the kind of products we can build and deliver to customers.

In addressing concerns surrounding AI ethics, you mentioned the importance of regulatory measures, technological transparency, and societal readiness. Could you elaborate on how Momentable approaches these areas to mitigate potential ethical dilemmas?

With great power comes great responsibility. AI is a powerful technology, and it’s very easy for those wielding it to amplify the impact of the good and the bad in the work they do. 

While we, in the tech space, are doing our very best to build great products that deliver great value, we are not social scientists, psychologists, or public servants. So, we can’t be expected to regulate and supervise ourselves, nor can we evaluate the impact of these technologies and the products using them on the individual and on society. 

It’s great when companies have values, codes of ethics, missions, and visions; however, those are not enough. Just like we do not rely on drivers to drive safely, we have traffic laws, signs, lights, and we make sure people driving a car are licensed and trained. We need to do the same with technologies, which, I would argue, have a massive impact on shaping our future as a species more than anything we’ve ever had in our history. 

At Momentable, we are acutely aware of the impact of generative AI on our stakeholders: artists, cultural organizations, and art lovers. We engaged our stakeholders and ran several experiments in which generative AI created artworks with input from artists, with their permission and consent. 

In addition to using AI to enhance customer experience on our platform, we are using the learnings to evolve our product and introduce Generative AI in a thoughtful way that adds value and advances the art and culture space.

How do you personally strategize and prioritize addressing the ethical implications of AI within Momentable’s projects and initiatives?

We start by listening to our stakeholders: artists, art lovers, clients, and our team. That listening happens everywhere, from simple Slack messages, to meetings with artists who are friends of Momentable, to conversations with experts, to sharing YouTube videos from leading content creators in the art space. 

By taking in the input, feedback, concerns, and advice, we make sure we are thoughtful about the next steps we plan to take. In addition to the data and numbers we get from market reports, we use the qualitative input we gather to help us focus on where we can add significant value. 

We understand the AI conversation is ongoing, and as the industry keeps moving at rapid speed, we must stay engaged, always learning, and maintain an open attitude.

As someone deeply involved in the AI industry, what advice would you offer to our readers who are concerned about the ethical use and bias in AI technology?

Ethical use and bias are not new challenges in tech; they are further amplified in AI, particularly generative AI. Three core reasons lead to challenges in ethical use and bias in generative AI: 

  1. Products are developed by the tech sector, which deals with many ethical challenges and major bias due to a lack of diversity. These challenges are amplified by keeping the technology and products closed, which shields them from outside scrutiny. 
  2. The data used to train and develop AI models also has many issues with how it was sourced and used, and it carries implicit bias within it. These issues are amplified even further since many AI models keep their training data undisclosed. 
  3. The nature of generative AI severely exacerbates these issues and challenges. By producing content mimicking the training data using code developed by a sector dealing with ethical challenges and bias, generative AI is adding to the problem with every answer it provides. 

Your ability to influence or mitigate the ethical use and bias in AI depends on where you are in the systemic hierarchy of the tech ecosystem. As a product builder and customer, there is very little you can do to change things.

The sector requires regulatory and systemic intervention. But it can’t be done without engaging with the stakeholders and having them at the table.

This is not to say that as a consumer you do not have any power; you do. You can make your voice heard through social media, customer feedback, calling your representatives, and voting.

I encourage you to learn and gain some hands-on experience to develop your understanding and appreciation for the technology and how powerful it is.

In your view, what role do education and skill development play in preparing society for the impact of AI, particularly in addressing job displacement and socio-economic challenges?

As technology continues to evolve and take over more of our roles at work as we know them today, the transformation will have massive implications for our lifestyles, how we do things, and even how we define ourselves and the value we assign to our roles.

We need to stop thinking about education as a precursor to job placement. This limited view means that education always lags behind the needs of the economy and is helpless to address the needs of our society.

Education must focus on the future, beyond the jobs of today or tomorrow. It must graduate innovators and value creators: creatives skilled at solving the problems we will face 50-100 years from now.

To create a better world, schools and universities must become open spaces for research and discovery, where art, technology, and culture collide and fuse to inspire new thought forms.

Could you share some examples of how Momentable ensures transparency in its AI technologies, particularly regarding decision-making processes and algorithms?

We do everything in collaboration and coordination with our key stakeholders. This gives us a baseline to measure against.

It’s easy to be influenced by what we read and watch and think it’s an accurate representation of the world. To avoid the pitfalls of building on the learnings and understanding within our own bubble, we always start by expanding our perspective. Put simply, we talk to people.

It’s slow, inefficient, and important. If we are going to use technology to impact people’s lives, we better speak to those people, learn from them, understand their perspective, and take into consideration what matters to them.

This approach led us to experiment with AI without limitations at the very beginning. We shared our results with our community: our users, partners, artists, and advisors.

We wrote about our process and shared it through workshops and webinars, and we took on all the feedback we could gather.

While the inclination at the beginning was to keep things close to the chest, this open and transparent approach helped us focus on the areas where AI can add the most value in our work.

In the case of Momentable, we use AI to help us deliver the best user experience and make it easier, faster, and better for our users to use Momentable and capitalize on the democratized access to the largest collection of great art in the world.

Considering the rapid advancements in AI, how do you navigate the balance between innovation and the need for regulatory control within Momentable’s operations?

Until a clear regulatory framework is developed and introduced, like most companies, we continue to operate within the regulatory frameworks for the tech sector and business in North America and Europe.

At Momentable, we are governed by our internal ethical code and guided by our strong sense of mission to bring the best visual experience to customers through innovative software, personalization, and immersive storytelling.

With our stakeholders being engaged and involved throughout the process, we make sure we create a space for creativity and innovation with boundaries that keep our work focused on adding value with minimal negative impact on our stakeholders.

What steps do you believe are necessary for governments and regulatory bodies to effectively oversee AI development and ensure alignment with ethical and safety standards?

Bring all the stakeholders, industry players, academia, builders, users, communities, regulators, and the public to the table to collaborate and constructively build for the benefit of all.

Form a steering board and create a framework for engagement so that adding value to all stakeholders is a main condition.

Be clear and transparent about the objectives and outcomes you are after.

Develop a roadmap with realistic short-term goals and objectives, in addition to highlighting the mid-term and long-term areas of focus.

Maintain connection with stakeholders through regular roundtable meetings. Share regularly, and invite input, feedback, and criticism.

Keep moving forward and getting things done.

From your perspective, what are the most pressing ethical dilemmas or challenges currently facing the AI industry, and how can businesses and individuals contribute to addressing them?

The most pressing ethical dilemmas or challenges currently facing the AI industry can be viewed from three perspectives: long-term, mid-term, and short-term.

Long-term: AI is going to play a significant role in shaping who we are as a species and how we live our lives. Just like there are generations today that do not know a world without smartphones and the internet, we will have generations who do not know a world before AI, and we will have a generational gap and challenges that arise from this gap. Older generations will feel left behind, while new generations will be heavily dependent on AI and AI-enabled devices. The energy consumption will be extreme, and errors caused by AI will have massive ramifications, especially since AI will be embedded in essential services, infrastructure, and defense. In many ways, some might say we will be at the mercy of AI, and even if AI doesn’t become aware or evil, the mistakes AI makes could be disastrous.

Mid-term: AI will cause massive socio-economic shifts that require offering support and help to those individuals and businesses impacted until the transformation is complete. Changes to the education sector are inevitable, and the evolution of our economy will have positive and negative implications that must be observed and prepared for. Focusing on the energy sector, making sure equitable, democratized, and open access to AI tools and training is crucial. New incubators, accelerators, resources, and support services must be made available to help manage the shift and protect society and the economy from the negative implications. As more people become proficient in using AI tools, they will be able to build massive businesses that compete with existing businesses, and just like smaller teams were able to disrupt businesses with software, now individuals can disrupt businesses with a few tools. Not to mention the malicious use of these tools can lead to even more challenges and threats.

Short-term: The immediate priority lies in creating spaces for engagement, learning, and hands-on experience with AI. It’s crucial to create an environment where individuals and businesses can understand, interact with, and ethically utilize AI technologies. This involves opening dialogues, providing educational resources, and encouraging ethical AI use through policy advocacy and community involvement. Businesses can lead by example, ensuring their AI applications adhere to ethical standards and are transparent in their operations, and share their learnings and discoveries. By actively participating in these efforts, we can navigate the complex and ever-changing terrain brought forth with the advancements in AI.

Finally, what are your thoughts on the future of AI and its potential to positively impact society, and do you have any closing remarks or key insights you’d like to share with our audience?

The future of AI holds remarkable potential for bettering every part of our lives. This technological evolution will accelerate advancements and enable breakthroughs in healthcare, climate science, education, and the sustainability of our species.

This optimistic vision is dependent on democratizing access, sharing openly, and ensuring there is transparency in how AI models work.

In addition, we must have an unwavering commitment to ethical principles, inclusivity, and equitable access to AI technology, prioritizing creating and delivering value to ensure all technological advancement, including AI, is a catalyst for positive change.

I invite you, the reader, to think of yourself as an active participant in this future being shaped today. Do not be a spectator; instead, take part, engage with AI, learn, build, and innovate. 

Now more than ever, the barriers to entry are minimal, and you can make an impact with less time, money, and resources. Embrace your role as a shaper of the future, and engage with the world being created in front of our eyes with your thoughts, words, and actions for the greater good.

Hussein Hallak

Co-founder of Momentable

Hussein Hallak is the Founder and CEO of Next Decentrum, the launchpad for the world’s most iconic NFT products. With deep experience in the art and technology fields, his recent roles include General Manager of Launch, one of North America’s top tech hubs and startup incubators, where he helped over 6,500 founders and 500 startups raise over $1 billion. In 2019, Hussein joined 3 tier logic as VP of Products & Strategy and worked with some of the world’s most valuable brands including Universal Studios, P&G, and Kimberly Clark.

Hussein writes and speaks about startups, blockchain, and NFTs, and advises several blockchain and tech startups including Ami Pro, Gigr, Mobile Art School, Fintrux, Majik Bus, Traction Health, Cloud Nine, and Peace Geeks. He was recognized in 2019 as one of 30 Vancouver tech thought-leaders and influencers to follow and has been featured in Forbes, BBC, BetaKit, Entrepreneur, DailyHive, Notable, and CBC. When not building products, he enjoys writing, reading, and engaging in meaningful conversations over good coffee, and his favorite pastimes include playing chess with his kids, binging on good drama and science fiction, drawing, and learning new guitar licks, sometimes all at the same time.
