
Democratizing AI

Democratizing AI is about empowering others by making the technology available to them. Audiences such as marketers in a company can access AI capabilities as part of their MarTech solutions without needing a technical background. It could also mean schools, where younger generations learn to use AI in responsible, secure, innovative, and creative ways.

This is the year when companies, after discovery phases and team experiments, are looking to activate and take advantage of recent AI advances.

[Image: an endless library with a robot librarian and people. Generated with Microsoft Designer]

And so questions emerge, such as "What should we democratize when leveraging AI?" There are common scenarios as well as specific ones, depending on the company and the industry it belongs to.

A common scenario across many industries is democratizing data through visualization and reporting. In digital marketing, for example, data scientists and data analysts can automate reports and make them available to the client. This is a great enabler, as business stakeholders are empowered with easy access and self-serve capabilities. Setting the business on this path of insights and data literacy opens up new opportunities and use cases. One of those is predictive analytics, where data leads leverage past consumer behaviour at the top of the funnel (such as bounce rates) or in the mid-to-lower funnel (for example, by enriching CRM data with web analytics) to predict actions for goals such as generating new leads or reducing churn. The learnings can then feed content strategy, e-commerce, and other areas.
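As a rough illustration of that churn use case, here is a minimal sketch in Python, assuming a flat table of web-analytics features already joined to CRM outcomes; the file name and column names are hypothetical:

```python
# Minimal churn-prediction sketch (illustrative only).
# Assumes web-analytics features joined to CRM outcomes;
# the file and column names below are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("crm_web_analytics.csv")  # hypothetical export

features = ["bounce_rate", "sessions_30d", "pages_per_session", "days_since_last_visit"]
X, y = df[features], df["churned"]  # churned: 1 = lost customer

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank customers by churn risk so marketers can target retention campaigns.
at_risk = X_test.assign(churn_risk=model.predict_proba(X_test)[:, 1])
print(at_risk.sort_values("churn_risk", ascending=False).head())
```

The point of democratization here is the last step: the ranked list can be surfaced in a self-serve report, so marketers act on it without touching the model.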

A more specific scenario is healthcare. AI can be leveraged first to process diverse, usually unstructured information from medical histories, lab results, and data sources covering a variety of medical conditions, and secondly to extract insights from it, with early diagnosis as one of many goals in mind. Democratizing this technology can also reduce administrative complexity by digitizing electronic health records, freeing up time for those working in healthcare.
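To make "extracting structure from unstructured notes" concrete, here is a hedged sketch using a generic named-entity-recognition pipeline; a real deployment would swap in a purpose-built clinical model and run inside a compliance-reviewed environment:

```python
# Illustrative sketch: pulling entities out of free-text clinical
# notes with a generic NER pipeline. The default model is NOT
# medical-grade; a clinical model would replace it in practice.
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")

note = "Patient reports chest pain since Monday; prescribed aspirin at Toronto General."
for entity in ner(note):
    print(entity["entity_group"], "->", entity["word"])
```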

[Image: an infinite library with a robot librarian. Via Microsoft Designer, modified by adding the people in the hallway with Adobe Firefly]

Another candidate for democratization is storage and computing. You can leverage cloud providers when building and deploying models. With providers such as Microsoft Azure and Amazon AWS, you can access high-end GPU computing power with immediate scalability and substantial cost savings, all within a few clicks, instead of setting up your own on-premises data centre.
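As one example of what "a few clicks" can look like in code, here is a sketch of renting a GPU instance on AWS EC2 with boto3; the AMI ID is a placeholder, and instance types and prices vary by account and region:

```python
# Sketch: renting a GPU instance on AWS EC2 instead of buying hardware.
# The ImageId is a placeholder for a deep-learning AMI in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI
    InstanceType="g4dn.xlarge",  # entry-level GPU instance
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])

# Terminate when the experiment ends so you only pay for what you use:
# ec2.terminate_instances(InstanceIds=["i-..."])
```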

This democratization also brings new challenges. As capabilities become available, we are asked to keep building new features and AI-powered automation solutions, so there is a risk that AI implementations surface without the appropriate guardrails. Here are some of those risks:

  • Teams without AI literacy.
  • Bias. Using the wrong training data, incorrect features, and/or the wrong models for the task, in addition to the biases of the designers themselves (data scientists and engineers), can all introduce bias into your AI models.
  • Hallucinations. When LLMs produce nonsensical and/or inaccurate outputs.
  • Deepfakes. Leveraging AI for media manipulation, where images, voices, videos, or text are altered (or fully generated). Misuse of this technique is a risk for governments, companies, and individuals, as it can be used to cause harm: stealing sensitive and personal information, manipulating and distorting context to create fake information, bullying, fraud, and more.

That said, we have a good chance of overcoming these risks by applying frameworks and good practices as part of our operations. Here are elements to consider within such a framework:

  • Governance.
    • Determine what you are democratizing. This includes internally, between the company's departments, as well as with external partners. As you define your ecosystem and pipelines, you can establish which areas to distribute between teams based on their expertise, knowledge, and the guardrails: data ingestion, models and features, etc.
    • Ownership and control of the data that is used to feed AI and ML systems.
    • Intellectual property (IP) is relevant. This defines not only what technology you will use, but also whether the company is comfortable using, for example, cloud platforms for image and/or audio processing without worrying that confidentiality will be breached and their "secret sauce" will end up outside the walls of the organization.
    • Frameworks that dictate the types of audiences (users) that will access the systems based on the use cases. For example:
      • Those accessing data visualization and reports
      • Those working on model development and fine-tuning
  • MLOps. Having the right processes and tools (e.g. cloud services and CI/CD pipelines) for consistently and responsibly delivering AI solutions (a minimal sketch of one such pipeline step follows this list). MLOps will bring to your operations:
    • A deployment pipeline
    • Monitoring
    • Automation for data preprocessing and for the steps around model training
    • QA automation, facilitating testing and validation
    • Potential cost savings on data storage and efficiencies on cloud services
    • Access for the many roles involved in end-to-end AI activation in the organization (marketers, developers, QA engineers, data engineers, data scientists, and others)
  • Training and knowledge sharing. Training should happen at every level, promoting AI solution thinking, agility, and experimentation within the organization's culture. This will accelerate achieving your OKRs.
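The sketch mentioned under MLOps above: a minimal quality gate that retrains a model, validates it, and only promotes it if it clears a threshold. This uses a public scikit-learn dataset purely for illustration; in a real pipeline this step would run inside CI/CD on every data refresh, and the threshold would come from your governance framework:

```python
# Minimal MLOps-style quality gate: retrain, validate, and only
# "promote" the model if it clears an accuracy threshold.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for your data
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
score = accuracy_score(y_val, model.predict(X_val))

THRESHOLD = 0.95  # arbitrary example gate
if score >= THRESHOLD:
    joblib.dump(model, "model.joblib")  # hand-off to the deployment step
    print(f"Promoted: accuracy={score:.3f}")
else:
    raise SystemExit(f"Blocked: accuracy={score:.3f} below {THRESHOLD}")
```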


As technology progresses, providers such as Amazon AWS, Microsoft Azure, and many others continue to make AI technologies available to others, allowing many of their customers to use cloud services to experiment and determine the value they can unlock. After all, 2024 is the year for companies to deliver on the value and the ROI, and to push for the adoption of AI tools.


Benefits to the organization

Cost efficiency. 

Teams can leverage publicly available data sets, algorithms, and models, allowing them to experiment and introduce AI capabilities and solutions into the company without a big investment.
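As a small illustration of how low that barrier can be, here is a sketch that pulls both a public dataset and a public pretrained model from the Hugging Face hub, which teams can run on a laptop:

```python
# Sketch: experimenting with publicly available models and data at
# near-zero cost via the Hugging Face hub.
from datasets import load_dataset
from transformers import pipeline

reviews = load_dataset("imdb", split="test[:5]")  # public dataset
classifier = pipeline("sentiment-analysis")       # public pretrained model

for review in reviews["text"]:
    print(classifier(review[:512])[0])  # truncate long reviews for the demo
```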

Mitigate team barriers and skill gaps.

Today, learning AI does not demand a big investment, thanks to the community behind it and the AI models available in the cloud. It has also become accessible to everyone, not only to data scientists and developers. Highly technical teams will build and train the models, while Gen AI talent (power and casual users well versed in Gen AI solutions) will use the technology regularly, bringing efficiencies to their daily work and therefore becoming valuable to the organization.

A company can also reconsider the geographic distribution of its talent, now more than ever, when hybrid and remote working has become the reality for almost every discipline in the organization. With little investment, companies can speed up the learning and usage of AI solutions for individual contributors across teams and geographies (including remote talent), providing a well-balanced and exciting workplace. This pushes for reorganizing from functional silos to integrated, cross-functional teams aligned to products or platforms, which can increase employee satisfaction, shorten talent development, boost employee engagement, and accelerate onboarding.

Accelerates innovation.

Innovation, generally speaking, starts with research, and AI brings automation as an enabler. For example, if a team uploads a set of white papers and other documents on a certain topic (like the impact of polluting our rivers) to a model, they can use it for synthesizing, summarizing, and enabling search, making it easier for researchers to bring new ideas to the problem at stake.
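A bare-bones version of that searchable research corpus can be sketched with TF-IDF retrieval; the "papers" folder is hypothetical, and a production setup would more likely use embeddings plus an LLM for summarization on top of the retrieved passages:

```python
# Illustrative sketch: index a folder of papers with TF-IDF and
# retrieve the most relevant ones for a research query.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {p.name: p.read_text() for p in Path("papers").glob("*.txt")}  # hypothetical corpus

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(docs.values())

query = vectorizer.transform(["impact of river pollution on ecosystems"])
scores = cosine_similarity(query, matrix).ravel()

# Print the three most relevant documents for the researcher to review.
for name, score in sorted(zip(docs, scores), key=lambda t: -t[1])[:3]:
    print(f"{score:.2f}  {name}")
```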

Prototyping is another great enabler provided by AI. Experimentation and a commitment to evolving ideas are paramount and support the drive for new sources of growth.
