Artificial Intelligence (AI) dominates today's business development headlines, and its possibilities for learning and talent development are enormous as well. AI is already present in our lives, helping us find information, make healthy choices (and often holding us accountable for those choices), and personalize almost every aspect of our existence.
Yet for all that is positive about AI's growing role in our lives, there is also a need to regulate it and ensure that certain boundaries are not crossed in the name of progress or science.
Perhaps the most comprehensive list of principles to guide the adoption of AI has been drawn up by the Future of Life Institute, a charity and outreach organization working to ensure that tomorrow's most powerful technologies are beneficial for humanity. Its set of twenty-three principles, called the Asilomar AI Principles, has been signed by thousands of scientists and other public figures, including Stephen Hawking, Demis Hassabis, and Elon Musk.
Governments are also taking action. On May 22, 2019, the European Union, the US, Canada, and other countries adopted a digital economic policy that includes five principles for the ethical development and use of AI. The World Economic Forum came up with nine such principles, as did the European Civil Law Rules in Robotics. In addition, the Carnegie Endowment has published its own eight guidelines for AI research and development.
Talent evaluation is sensitive
Talent development is an important yet somewhat sensitive topic in organizations. Its main function is to ensure that people with potential are recognized and offered the right tools and materials to reach that potential, thereby advancing organizational goals.
While the idea of rewarding and promoting employees based on both performance and plausible future trajectory is commendable, many variables are at play. In addition, programs that are supposed to incentivize employees sometimes have the opposite effect.
I participated in an event hosted by a multinational company where they were trying to establish a Guinness record for a story written by the largest number of individuals at the same time. The people writing the story were the company’s top talents from each department. So far, so good. But the teams supporting these individuals were made up of those who did not score as high on the talent grid. And to add insult to injury, the writers wore white shirts while the others had black ones.
A principle of fairness
What I've just described was a bad call made by the HR team. Celebrating achievement is one thing; pointing a finger (in the form of a dark-colored shirt) at employees who may have done their jobs in an exemplary manner but didn't score well on the talent grid shows poor people skills.
If that happened when a whole group of people was involved, what's to say it can't happen when an AI algorithm is calling the shots?
When establishing which grid will be used for measuring talent, TD specialists need to look at how bias can find its way into the AI algorithm and make sure there are safeguards against it. HR specialists have to be involved in the design, deployment, and evaluation phases so they can advise on the necessary adjustments.
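One concrete safeguard HR and TD specialists can ask for is a simple audit of the algorithm's outputs across demographic groups. The sketch below uses the "four-fifths rule" style disparate-impact ratio, a common screening heuristic in employment-selection contexts; the data, group labels, and 0.8 threshold here are hypothetical, and this is a minimal illustration rather than a complete fairness audit.

```python
# Minimal sketch of one bias check: compare the rate at which a talent
# algorithm flags people as "high potential" across demographic groups,
# then compute the ratio of the lowest to the highest group rate.

from collections import defaultdict

def selection_rates(records):
    """records: list of (group, was_flagged) tuples -> flag rate per group."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for group, picked in records:
        totals[group] += 1
        if picked:
            flagged[group] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest one."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical output of a talent-grid algorithm: (group, was_flagged)
outcomes = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

ratio = disparate_impact_ratio(outcomes)
if ratio < 0.8:  # the conventional "four-fifths" warning threshold
    print(f"Possible bias: impact ratio {ratio:.2f} is below 0.8")
```

In this toy data, group A is flagged 75% of the time and group B only 25%, giving a ratio of 0.33 and triggering the warning. A low ratio does not prove the algorithm is biased, but it is exactly the kind of signal that should prompt the design and evaluation review described above.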
Diversity and inclusion need to be in focus
I've recently written about the importance of having instructional designers work with neurodiverse learners in mind. "Diverse" is not a synonym of "impaired," and these individuals can prove highly valuable to their teams and the organization thanks to their unique perspectives and ways of interpreting and acting.
AI presents immense possibilities, but it can only be truly innovative if different groups are represented within its algorithms. Genuine diversity goes beyond geographical spaces and cultural variations. AI's logic is constructed by people, so companies that truly want to be inclusive and encouraging of diversity need to bring diverse contributors into the creation process and allow them to make their mark.
If a homogenous crowd is responsible for the entire algorithm, bias will be a given, and all efforts towards diverse talent development will fail.
AI is definitely growing and will likely become the main way employees access information and development opportunities. It's therefore paramount to ensure from the start that these technologies work for and benefit everyone, regardless of demographic, gender, or cognitive differences. Genuinely good AI systems must be mindful of the needs, values, background, and cognitive patterns of every individual accessing them.
Raluca Cristescu is a Faculty of Letters graduate with over ten years of experience in corporate training, focused mainly on soft skills for customer service and direct sales.