SEBI Asks Investment Advisers to 'Warn and Inform' Users About Using AI: Details
The Securities and Exchange Board of India (SEBI) has issued a stern warning to investment advisers about the risks of using artificial intelligence (AI) tools in their operations, emphasising transparent disclosure and investor protection.
Trivesh D, COO of stock trading platform Tradejini, told PTI that "the possibility of unintentional data exposure highlights the need for strong security measures and clear disclosure to clients about the extent of AI tool usage."
In its consultation paper earlier this month, SEBI highlighted the increasing use of AI tools in Research Analyst (RA) and Investment Adviser (IA) services.
Advances in the underlying technology have made many AI tools, including Google's Gemini and OpenAI's ChatGPT, accessible as chatbots. These tools enable human-like conversation and support tasks such as data analysis and summarisation, potentially improving productivity and efficiency.
In the consultation paper, SEBI stated, "These AI tools, however, may not adequately safeguard sensitive data shared during conversations, potentially leading to unintended data exposure and concerns related to data security."
"While embracing this innovation, we must be mindful of its implications and responsibilities," stated Feroze Azeez, Deputy CEO of Anand Rathi Wealth Ltd. IAs offer customised services based on risk assessment and suitability in accordance with client-specific needs. Similar to this, research assistants (RAs) offer recommendations based on predetermined guidelines and methodology that they have adopted. They also have to maintain documentation of the research report, recommendations, and justification for those recommendations.
While AI tools can significantly aid RAs and IAs in their work, SEBI noted that they may not always produce the insightful results expected, given their limited comprehension of intricate security-specific or client-specific circumstances, such as a client's ambitions or personal and financial situation.
Furthermore, such tools may not always have access to all the information needed to generate an output or recommendation. SEBI added that AI tools might not reveal, for instance, whether risk-profiling and suitability requirements have been met by the IA.