In this episode of Team Anywhere, we discuss the use of AI to strengthen productivity and engagement in the virtual and hybrid world. Yoav highlights the importance of human-centric leadership when incorporating new technologies, emphasizing the need to empower human potential rather than replace it.
We dive into the ethical dilemmas raised by advanced AI tools, the case for responsible innovation, and the risks associated with AI, including accuracy, bias, privacy, sustainability, and societal disruption. Tune in to discover how to navigate the ever-evolving landscape of AI with a human-centric approach as we Team Anywhere.
Ethical AI Practice and Its Evolution
From the recognition of Wikipedia as a reliable source to the expected shift toward using AI tools for efficiency, Yoav traces how ethical concerns around new technologies diminish as they become mainstream. Since its launch in November, the generative AI tool ChatGPT has scaled rapidly, raising concerns about disruption to how, and where, we work. Through his role, Yoav emphasizes the importance of responsible technology development and the need to augment human potential rather than replace it. Leaders should remember that general awareness of security and privacy has not yet caught up with the technological literacy needed to use these systems.
Generative AI, Workplace Trust and the Value of Human Interaction
As technology advances, humans grapple with the ethical gray areas created by new tools. Leaders now wonder whether the work submitted to them is a team member's original work or the output of ChatGPT. This uncertainty could erode trust in the hybrid workplace. Additionally, Yoav cautions employees not to submit confidential company information, such as strategic plans or proprietary data, to systems like ChatGPT. Technological literacy is important, and leaders who incorporate new technologies into work also need to teach the necessary ethical boundaries.
AI Disruption and Mitigating Risks
Drawing attention to the disruptive power of AI, Yoav highlights the risks associated with this technology. The five main risk vectors discussed are:
1. Accuracy – how do we deliver results that are truthful and free of hallucinations?
2. Bias and toxicity – the training data underlying these systems is biased; how do we avoid perpetuating those biases?
3. Privacy and security – people want control over their data and expect these systems to keep it secure.
4. Sustainability – minimizing the environmental harm of training and developing these systems.
5. Societal disruption – how is this going to change us as a society?
With these risks in mind, we need to develop guidelines to ensure accurate, safe, honest, empowering, inclusive, and sustainable AI development. By maximizing benefits and minimizing risks, we can navigate the AI revolution responsibly.
Who is Yoav Schlesinger?
Yoav is Architect of Ethical AI Practice at Salesforce, where he helps the company embed ethical product practices to maximize the societal benefits of AI. Before joining Salesforce, Yoav was a founding member of the Tech and Society Solutions Lab at Omidyar Network, where he launched the Responsible Computer Science Challenge and helped develop EthicalOS, a risk-mitigation toolkit for product managers. Before that, he brought his Stanford undergraduate studies in Religious Studies, Political Science, and International Security to bear in a more than 15-year career as a consultant, leader, and two-time founder of mission-driven, social-impact community organizations.
- LinkedIn: https://www.linkedin.com/in/yschlesinger/
- Website: https://www.salesforceairesearch.com/trusted-ai