WEBINAR

Creating a Culture of AI for Canadian Pension Funds

Resource Information:

Slide Presentation

AI Governance Charter Template

AI Policy


Contact Information

Akio Tagawa 
President & CEO
LINEA SOLUTIONS

[email protected]

For questions on CE/CPD, please contact: [email protected]

Responses to outstanding questions from the webinar


What metrics or KPIs do you track to (help) measure the success of the AI implementation?

We have not yet gotten to the point of being able to establish good KPIs or metrics in our company. One limiting factor is that the main AI tool companies released their tools somewhat prematurely from a functionality standpoint, so most do not yet offer good enterprise account management or usage reporting. This means we have had difficulty even tracking who is using the software provided to them, so the best we have been able to do is survey the staff and trust that they are telling us the truth. At the moment, the primary metrics we look at are whether our staff are using the tools at all, and we hope to get to the point of better understanding how they are using them. We are also tracking the number of use cases we have and the number that get approved. We do not yet have a threshold for determining when a use case becomes viable; instead, we tend to treat it as a yes/no decision, where excessive hallucination by the AI tool negates its viability.

 

What controls are necessary with Generative AI to protect internal and confidential information from getting into the public domain?

This has not been as big a pain point for us as the media has made it seem. As a consulting firm, we work with pension plans that handle member data, but we have data security policies in place that generally prevent our staff from accessing these data sets on their own devices in the first place. That policy resides outside of, and predates, the use of AI tools; it is not just about AI usage. Secondly, with the exception of OpenAI’s ChatGPT Plus, the tools we have been using do not retain the information we include in prompts. With ChatGPT, we have a policy in which staff are asked to generally turn chat history off so that data are not kept; if they must keep history on, they are required to scrub any confidential, proprietary, or client-identifying information before submitting a prompt.
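As a rough illustration of what an automated pre-submission scrub might look like, here is a minimal Python sketch. The redaction patterns and the scrub_prompt helper are hypothetical examples, not part of our actual policy, and a real implementation would need a far more complete set of rules for client-identifying information.

```python
import re

# Illustrative patterns only; a real policy would need to cover the full range
# of confidential, proprietary, and client-identifying information.
REDACTION_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SIN": r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b",  # Canadian Social Insurance Number format
    "PHONE": r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b",
}

def scrub_prompt(text: str) -> str:
    """Replace anything matching a redaction pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = re.sub(pattern, f"[{label} REDACTED]", text)
    return text

# Hypothetical member details, scrubbed before the text is sent to an AI tool.
print(scrub_prompt("Member Jane Roe, SIN 123-456-789, wrote from jane@example.com"))
```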

 

Do you foresee a new role in organizations that specifically manages AI Risk?

This is a question that will draw a lot of differing opinions. Our take is that because of the scale, the universality, and the specific expertise that AI requires, organizations will end up needing to create a new role that manages AI risk specifically. Risk Management and Information Security will certainly have a role as well, and there will need to be close collaboration and good communication among those departments. In that scenario, the “head of AI risk” would likely have a dotted-line relationship to the “head of risk”, but might report directly to the “head of information security”. Please keep in mind that this is one opinion, and this perspective will be contested for years to come.

 

Try another language? Here's a question in Spanish: ¿A qué edad me puedo retirar? (“At what age can I retire?”)

We are sorry we did not have time to test this question, but we believe the tool would have answered it successfully. ChatGPT is capable of translating among more than 95 languages, including Spanish. It was very interesting for us to see the chatbot tool process training data in English and then handle questions and responses in French.
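Since we could not show it live, here is a minimal sketch of how the Spanish question might be posed to a chat model through the OpenAI Python SDK. The model name and system prompt are illustrative assumptions; a production member-facing chatbot would need grounding in actual plan documents and appropriate guardrails.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # Hypothetical system prompt; a real assistant would be grounded in plan rules.
        {"role": "system",
         "content": "You are a pension plan assistant. Reply in the member's language."},
        {"role": "user", "content": "¿A qué edad me puedo retirar?"},
    ],
)
print(response.choices[0].message.content)
```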

 

A large problem in DB administration is unlocatable plan members. Can AI play a role in locating members?

This is a fascinating use case, and a likely scenario where AI could help; however, it is questionable whether the commercially available tools could address all of the requirements today, ranging from the basic functionality to all of the member data security risks. The rate of change and improvement is tremendous, though, and we think this type of use case could be addressed as soon as nine months from now. One example would be AI assisting in the development of tools for locating lost deferred members, such as the proof-of-life/death match-likelihood algorithms that vendors (e.g., PBI, Berwyn, Pentor Charity Services) use to confirm whether a member is alive or may have passed away. A tool purpose-built for this could find patterns in the data that are not detectable today.
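To make the idea of a match-likelihood algorithm concrete, here is a minimal Python sketch that scores how well a plan's member record matches a candidate record from an external data source. The fields, weights, and similarity measure are illustrative assumptions; the vendors named above use far more sophisticated probabilistic models on much richer data.

```python
from difflib import SequenceMatcher

def field_similarity(a: str, b: str) -> float:
    """Normalized string similarity between two field values (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def match_likelihood(member: dict, candidate: dict) -> float:
    """Weighted average of per-field similarities; the weights are guesses."""
    weights = {"last_name": 0.35, "first_name": 0.20, "birth_date": 0.35, "postal_code": 0.10}
    return sum(w * field_similarity(member[f], candidate[f]) for f, w in weights.items())

# Hypothetical records: a lost deferred member and a candidate from an external registry.
member = {"last_name": "Tremblay", "first_name": "Marie",
          "birth_date": "1958-04-02", "postal_code": "H2X 1Y4"}
candidate = {"last_name": "Tremblay", "first_name": "Marie-Claire",
             "birth_date": "1958-04-02", "postal_code": "H2X 1Y6"}

score = match_likelihood(member, candidate)
print(f"Match likelihood: {score:.2f}")  # scores near a set threshold would go to human review
```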