Curated by THEOUTPOST
On Thu, 11 Jul, 4:05 PM UTC
4 Sources
[1]
Council Post: Financial Services And AI: Cautious Optimism Paves The Way For Thoughtful Adoption
Michael Boese is CEO of Hearsay Systems, a trusted global leader in digital client engagement for financial services.

When it comes to new technology and financial services, caution rules the day, thanks to the heavy set of regulations these firms and their personnel must follow. That's why, when AI burst onto the scene, it seemed as though it might take years for financial services companies to feel comfortable integrating it into their processes and functions. Curiously, though, AI made its way into nearly every conversation I've had with customers and colleagues over the last year. The general theme has been that while AI is compelling, it also seems a little dangerous: Could a firm use it productively, reliably and safely without running afoul of regulations?

AI has quickly become one of the most disruptive, transformative technologies of our lifetime. While it's undeniable that human intervention -- in the form of ethical analysis, strategic judgment and compliance oversight -- is essential for the ongoing evaluation of use cases, AI simply cannot be ignored by financial services firms. Today, organizations of all sizes are piloting use cases that make clear sense for their business, advisors and clients. Here are three ways firms are already using AI to their benefit.

1. Assisting With Content

A Gartner report noted that content generation is the most prominent use case for GenAI. It has the potential to be an incredible resource for advisors who want to reach clients and prospects through social channels. It's not always easy to develop compelling content, whether about services, corporate initiatives, investment education, or events and issues that affect communities -- particularly when you need to maintain a consistent schedule tailored to each channel.

A thorough AI prompt can help generate copy consistent with corporate messaging, which the advisor can then revise to fit their own style and brand before sending it for compliance approval prior to publication. That layer of compliance helps reduce the risk associated with AI-generated content by bringing a human into the approval workflow. As individuals consume more social content across channels but choose to engage with it less, a human in the loop is key.

Hearsay's study of the social activities of more than 260,000 financial services professionals at 100 leading global financial services firms (and 13 million published posts) found that original content performed far better than unmodified content suggested by administrators, yielding 10 times better engagement. The problem is that only an estimated 4.4% of all published social content in 2023 was original. If advisors instead use AI as an idea generator for content they then personalize, engagement could climb considerably. This is a simple way to stand out to target audiences.

2. Seeking Approval

On the administrative side, firms are now piloting AI to assist in compliance oversight. Compliance teams are overwhelmed as both regulations and social content volumes increase, forcing content reviewers to prioritize only the most urgent items. As a result, a lot of submitted content that requires pre-approval sits in a backlog, going unprocessed, rendering many posts irrelevant by the time they are approved. One way firms are using AI to help solve this is by flagging potentially problematic content before it is posted. AI can be trained to look for phrases or words that trigger compliance alarms.
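To make that screening idea concrete, here is a minimal sketch of rule-based pre-screening in Python. The trigger phrases, function name and triage logic are invented for illustration; production systems pair simple rules like these with trained models and a full review workflow.

```python
import re

# Hypothetical trigger phrases a compliance team might flag (illustrative only).
TRIGGER_PATTERNS = [
    r"\bguaranteed returns?\b",
    r"\brisk[- ]free\b",
    r"\bcan't lose\b",
    r"\binsider\b",
]

def screen_post(text: str) -> dict:
    """Flag phrases that commonly trigger compliance review.

    Returns the matched phrases so a human reviewer can prioritize:
    posts with hits go to the front of the review queue, while clean
    posts can wait in the normal backlog.
    """
    hits = [p for p in TRIGGER_PATTERNS if re.search(p, text, re.IGNORECASE)]
    return {"flagged": bool(hits), "matches": hits}

# Triage a backlog: review flagged posts first.
backlog = [
    "Join our webinar on retirement planning basics.",
    "This fund offers guaranteed returns with zero downside!",
]
for post in sorted(backlog, key=lambda p: not screen_post(p)["flagged"]):
    print(screen_post(post), "--", post)
```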
AI tools are also increasingly able to analyze video for content that is not allowed. This helps reduce the workload on compliance team members and focuses attention in the right place, thereby speeding up approval processes.

3. Upleveling Service

Although firms are still in the early days of using AI for stock picks and portfolio personalization, that doesn't mean AI can't be useful for tailoring client service. With the right safeguards in place, AI could be deployed as an intelligent assistant that tells advisors when and how they should follow up with clients and prospects, which channels people prefer and what they might be looking for. For instance, by analyzing social media data (such as posts with the most likes, comments or shares) integrated into customer relationship management (CRM) systems, next-best-action engines can help advisors gain insight into customer interests and detect signals like buying intent or moments when clients may need assistance. Additionally, AI can summarize an advisor's interactions with a customer across channels, highlight key action items, analyze sentiment and even alert advisors to potential compliance risks.

In these scenarios, AI could analyze client data to identify the moments when an advisor can play a pivotal role. Clients are busy living their lives, after all, and this is why they have an advisor -- to put their interests first. AI can usher in a new, elevated level of client service.

These three use cases are among those being responsibly piloted by financial services organizations with appropriate compliance measures in place. Rather than increasing risk, AI can help mitigate it. Instead of violating regulations, AI can assist with compliance. It can also increase engagement and team efficiency. In contrast to the industry's past hesitation around new technology, financial services firms are no longer stuck on the sidelines. The industry is actively embracing AI and realizing its many benefits, which is truly exciting.
[2]
Council Post: Why Fintech Needs To Think Beyond AI
Like every other corner of the tech world, the fintech industry has a certain weakness for buzzwords, particularly when they make a basic tool or function sound more sophisticated and impressive than it actually is. I've noticed this is particularly true amid the AI craze. The best way to tout a product or feature as high-tech right now is to call it "AI-assisted" or "AI-powered," even if that claim is tenuous at best. And automation for its own sake can often be a waste of resources that misses the bigger picture.

In running toward an "all AI, all the time" future, crucial parts of other high-tech strategies and solutions are falling by the wayside. Minimizing repetitive work for employees using AI is an important and even admirable goal, for instance, but sometimes the juice isn't worth the squeeze. Real technological advancement isn't just about being the most out-of-the-box; it's increasingly about being the most customizable and combining the latest available tools to solve the same business problems that never went away.

In other words, the AI craze is threatening to cannibalize how we think about and even market products as high-tech. Not all high-tech products and solutions are AI-powered, and that's perfectly fine. By the same token, not all AI-powered products are sophisticated tech either -- in fact, some of the most useful AI-powered or AI-assisted tools are the more mundane and less marketable ones, such as text prediction in email programs like Outlook or Gmail.

This can lead to a bit of running with scissors when it comes to development. Caution is sometimes lost in the race to implement experimental new tools that promise unlimited potential and often involve an unclear exchange of sensitive data. Again and again, machine learning models are improperly touted as AI, as though human-written algorithms weren't still the basis of any tool used to train the model. (Lest we forget, AI assistance and results must be generated by an autonomous digital system.) Overpromising remains a persistent problem, and not just among eager startups. Even major players like Amazon can fall into this trap, as evidenced by its recent response to reports that its Just Walk Out stores were monitored by remote (human) workers and not just AI.

But what makes a genuinely advanced technological tool, if not AI? And what does high-tech look like within the context of fintech? It needs to be at least several of these things: scalable, highly embeddable, super efficient, business-oriented and extremely secure. One thing that can be AI-free but no less high-tech for it is the application programming interface, or API for short. On the most basic level, an API is software that allows two programs to communicate with each other. In most cases, APIs function in a pool, overlapping and serving third-party systems in a streamlined way. A simple API can be just a connection method for exchanging data between two systems, but a high-tech one is a game-changer in fintech (and elsewhere), capable of providing high-value interactions like embedded financing solutions. One API can provide a multitude of services with high-tech architecture and tools under the hood. Speed, security, ease of implementation and functionality are the key elements of a high-tech API solution -- AI need not be included.

I think we'll see real high-tech advancement in fintech when the industry starts seeing AI as a tool rather than a marketing bauble and a skeleton key.
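As a rough illustration of how a single embedded-financing interaction can hide all of that complexity behind one call, here is a sketch in Python. The endpoint, request fields and API key are hypothetical, not any real provider's interface.

```python
import requests

# Hypothetical embedded-financing endpoint and credentials (illustrative only).
API_BASE = "https://api.example-lender.com/v1"
API_KEY = "sk_test_placeholder"

def request_financing_offer(merchant_id: str, amount_cents: int, currency: str = "USD") -> dict:
    """Ask a lending partner for a point-of-sale financing offer.

    The calling platform (e.g., an e-commerce checkout) never needs to
    know how the lender prices risk; the API is the whole contract.
    """
    response = requests.post(
        f"{API_BASE}/financing/offers",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"merchant_id": merchant_id, "amount": amount_cents, "currency": currency},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g., {"offer_id": "...", "apr": 9.9, "term_months": 12}

# offer = request_financing_offer("merchant_123", 250_000)
```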
Next-gen APIs are already being created to suit AI engines, replacing human-written algorithmic approaches to combining data sources and improving exchange methods. And since generative AI can understand data without converting it to a predefined format, many basic APIs will become redundant. By banking on APIs and combining the high level of security they promise with generative AI's potential, fintech has a shot at achieving its potential without losing the high-tech forest for the AI trees.

Fintech's problems won't be fixed by AI alone. Instead, we need to go all-in on thoughtful AI integration with other tools that are scalable, highly embeddable, super efficient, business-oriented and extremely secure. In other words: everything in moderation, even AI.
[3]
Council Post: The Devil Is In The Details When Working With AI Products
Gaurav Aggarwal is co-founder of Truva and a Forbes 30 Under 30 honoree, helping SaaS businesses improve customer adoption and reduce churn with AI.

The digital marketplace is flooded with AI products (some of them built with generative AI), and new AI tools are announced every day. In fact, the global AI market is predicted to reach $826 billion by 2030, up from $135 billion in 2024. This abundance will breed confusion. More importantly, if issues arise with your AI products, customers won't think twice before switching to the next product. Even before the rise of AI, Zendesk research found that "50% of customers will switch to a competitor after just one bad support experience." As a consequence, AI is becoming an intensely competitive playing field.

Meanwhile, many people believe the illusion that LLM tools like ChatGPT or Gemini are a universal cure for their problems. Some companies assume they will be able to completely outsource tasks and roles to large language models (LLMs). The promise of AI can only be fulfilled when we realize something important: The devil is in the details. This old proverb means that achieving meaningful results demands more than a superficial understanding; it requires deep comprehension.

Tools like GitHub Copilot can generate entire blocks of code and suggest solutions, which speeds up the coding process. A programmer tasked with creating part of an application, for instance, can use Copilot to instantly generate the necessary structure and endpoint functions, providing a substantial head start. This seemingly miraculous ability to produce functional code at the click of a button presents an enticing proposition: reduced development time and increased productivity. Research in 2023 found that GenAI "can improve a worker's performance by as much as 40% compared with workers who don't use it."

Or consider a content writer working on a series of articles for a blog. ChatGPT can quickly provide outlines, generate introductory paragraphs and suggest topics, accelerating the initial stages of content creation. This efficiency allows writers to focus on refining their ideas and ensuring the quality of their work. At first glance, these tools seem to offer a better solution to complex tasks. The promise of AI tools like Copilot and ChatGPT is undeniably compelling: They appear to simplify workflows and enhance productivity significantly. However, this surface-level ease is deceptive.

We humans are creatures of habit. Whether it's the leg you habitually slip into your trousers first or the way you instinctively start combing your hair, routines are deeply ingrained in our behavior. Similarly, when building a tech product, clean and consistent code minimizes distractions, maintains focus and allows developers to allocate more mental resources to problem-solving and creative thinking. These are crucial factors given that reading code consumes significantly more time than writing it: The ratio of time spent reading code to writing it is said to be "well over 10 to one." A quote often attributed to Donald Knuth puts this nicely: "Programs are meant to be read by humans and only incidentally for computers to execute."

While Copilot can generate large chunks of code, it doesn't account for the specific requirements of the project. The developer still needs to ensure that the generated code adheres to the project's coding style and integrates seamlessly with other components. It needs to be consistent.
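One way to make that consistency check concrete: a minimal sketch, assuming a Python project, that gates AI-generated code behind the project's own formatter and test suite before a human reviews it. The tool choices (black, pytest) and the function name are illustrative assumptions, not anything the author prescribes.

```python
import subprocess
from pathlib import Path

def vet_generated_code(path: Path) -> bool:
    """Run project style and test checks on AI-generated code.

    Passing these gates doesn't make the code correct -- a human still
    reviews architecture and business logic -- but it catches the
    obvious style drift that coding assistants tend to introduce.
    """
    checks = [
        ["black", "--check", str(path)],  # style: matches project formatting?
        ["pytest", "-q"],                 # behavior: existing tests still pass?
    ]
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Check failed: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
            return False
    return True

# if vet_generated_code(Path("src/new_endpoint.py")):
#     print("Ready for human review.")
```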
Ensuring that consistency involves a deep understanding of the project's architecture, the underlying business logic and the edge cases that AI generally overlooks. In my experience, many software engineers feel that coding is often the easiest part of the development process; aspects like architecture and business logic require far more understanding to get right.

Similarly, ChatGPT's ability to generate text quickly doesn't eliminate the need for humans. The initial drafts it produces might be well-structured collections of glued-together ideas, but they often require substantial editing to align with the intended tone and style. AI-generated content of all sorts is flooding the internet, and one intriguing ongoing conversation concerns how readily readers can recognize it. Recently, for instance, Paul Graham pointed out the use of the word "delve" in likely AI-generated text he received.

Another alarming concern with LLMs is the factual accuracy of the content they generate. They tend to hallucinate and produce their own versions of facts. LLMs like ChatGPT have been called "stochastic parrots" in the scientific community: like parrots, they throw out words based on mathematical approximations rather than understanding. This has led to expensive mishaps, such as Air Canada's customer support chatbot inventing its own version of the facts and misleading a customer. This can be a very dangerous outcome.

In essence, while AI tools provide valuable assistance, they are not a universal cure. They excel at handling repetitive tasks and generating initial drafts, but their full value is realized only when combined with a deeper level of human understanding and expertise. Most business leaders agree that AI is going to be critical to success in the coming years. Yet the reality is that most companies are lagging. The key to avoiding the low success rate of AI projects lies in initiating those that focus on specific solutions fulfilling real business requirements. Successful AI implementation requires clear business objectives, high-quality data and collaboration between teams. Without these elements, the promise of an AI space that keeps getting better will remain unfulfilled.
[4]
The Generative Generation - AI, chatbots and financial compliance
In March 2023, SEC Chairman Gary Gensler described artificial intelligence as "the most transformative technology of our time, on par with the internet and mass production of automobiles". When any groundbreaking tool arrives, a period of adaptation is required. This is more pronounced for regulators, who need to quickly assimilate enough information to not only understand, but eventually govern, the technology in question. Meanwhile, that technology permeates the industry at a breakneck pace and new habits are established, for better or worse. This adds significant pressure to a role that already deals with plenty.

Regulators are perennially playing catch-up; theirs is a reactive role, governed by many factors outside their control. When something as transformative as artificial intelligence comes along, that handicap intensifies. AI adds an element of chaos. There is a huge amount of responsibility to govern it effectively; this feels like a critical moment in human development, and one that we must either get right or learn from quickly. This applies broadly, but also to specific industries like finance that represent a microcosm of modern society. Below we'll analyze the regulators' current positions, the existing frameworks that AI already falls into, and where its regulation could be heading.

SEC

In July 2023, Gensler expressed concerns over the use of AI in investment decision-making. He stated that it risks accentuating the dominance of a small number of tech platforms, and questioned whether AI models can provide factually accurate, bias-free advice. Gensler was well positioned to query this: false rumors of his resignation had circulated due to AI-generated misinformation.

In June 2024, the SEC's Investor Advisory Committee held a panel discussion on the use of AI, and Gensler reiterated his concerns, stressing that it could lead to conflicts of interest between a platform and its customers. He also emphasized that fundamental requirements still apply, and "market participants still need to comply with our time-tested laws". Despite this, little concrete guidance had been provided up to that point, with some proposals discussed last year remaining under consideration.

FINRA

In its 2024 Annual Regulatory Oversight Report, FINRA explicitly classified AI as an "emerging risk", recommending that firms consider its pervasive impact and the regulatory consequences of its deployment. Ornella Bergeron, FINRA senior vice president of member supervision, said that despite the operational efficiencies afforded by developments in AI, there were worries: "While these tools can present really promising opportunities, their development has raised concerns about things like accuracy, privacy, bias and intellectual property."

In May 2024, FINRA released updated FAQs to clarify its stance on AI-created content. These essentially stressed that regulatory standards still apply, and that firms are accountable for their output regardless of whether it was generated by humans or AI.

CFTC

The Commodity Futures Trading Commission (CFTC) has been relatively active around AI. In May, it released a report entitled "Responsible Artificial Intelligence in Financial Markets: Opportunities, Risks & Recommendations." This seemed to signal the CFTC's desire to oversee the space. In summary, the agency appeared perturbed that AI "could erode public trust in financial markets". The report outlined potential risks, including the lack of transparency around AI decision processes.
While the CFTC seemed happy to take the reins, the report called for continued collaboration across federal agencies. It also recommended hosting public roundtable discussions to foster a deeper understanding of AI's role in financial markets and to develop transparent policies.

How are existing frameworks impacted?

Fundamental recordkeeping regulations like the SEC Marketing Rule and FINRA Rule 2210 put strong emphasis on the accuracy and integrity of the information a firm communicates to its customers. The use of AI tools may well jeopardize these tenets, given the unpredictable and often inaccurate rhetoric that language models have built a reputation for. As FINRA clarified, it is the content itself that firms will be held accountable for; the tools used to create it are not necessarily relevant. This means that, at the very least, all machine-generated output should be reviewed thoroughly before publication.

AI-Washing

Despite much AI regulation barely reaching the proposal stage, we have already begun to see enforcement in some relevant areas. In March, the SEC launched enforcement actions targeting "AI-washing", accusing two investment advisory firms of exaggerating the use of AI in their products and services to mislead investors. While the penalties imposed in these cases were minimal, the director of the SEC's Enforcement Division, Gurbir Grewal, confirmed that they were intended to send a message to the industry:

"I hope these actions put the investment industry on notice. If you are rushing to make claims about using AI in your investment processes to capitalize on growing investor interest, stop. Take a step back, and ask yourselves: Do these representations accurately reflect what we are doing or are they simply aspirational? If it's the latter, your actions may constitute the type of 'AI-washing' that violates the federal securities laws."

At June's Investor Advisory Committee meeting, the SEC discussed rules initially proposed in July 2023 that address potential conflicts of interest arising from the use of predictive data analytics (PDA) in investor interactions. The proposals called for any such conflicts of interest to be recorded and then quickly eliminated. The June 6 panel participants were largely supportive of these proposals, which are now expected to proceed quickly. In the meantime, by applying punishments swiftly and sending a message on AI-washing, the SEC appears eager to show strength through enforcement in more clear-cut scenarios.

FINRA

As well as confirming companies' responsibility for chatbot-generated output, the updates to FINRA's FAQs stressed that firms must also supervise these communications. This means that policies and procedures must be established. Those guidelines could address how technologies are selected in the procurement phase, how staff are trained to use them, what level of human oversight exists after content has been generated, and so on. If firms have already adopted chatbot technology, or are considering it, the next step should be to develop this internal framework.

CFTC

The CFTC's forthright views on how AI should be regulated showed a clear commitment to taking responsibility and leading the way. It encouraged public discourse and collaboration across agencies, while its report identified "opportunities, risks and recommendations". The next step, again, is to build that information into a formalized framework.
Meanwhile, the Department of the Treasury published a request for information on the use of AI in the financial services sector, four months after the CFTC did the same. It specifically highlighted a potential "human capital shortage": a scenario in which companies deploy AI tools without enough employees who fully understand their intricacies. The Treasury's involvement has amplified the voices of the CFTC, FINRA and the SEC, and it's now a case of waiting for their frameworks to be collectively drafted. That may not take as long as anticipated. In a fitting development, regulators are now using AI themselves to help them keep up. "The SEC has begun analyzing how generative AI models could potentially help tackle the regulators' workload", said Scott Gilbert, vice president, risk monitoring, member supervision with FINRA, at the FINRA conference.

The human touch

A recent report from the FINRA Investor Education Foundation revealed that despite AI's increasing influence across society, few consumers would rely on it for personal finance advice, and most remain skeptical about the financial information it produces. This lack of consumer trust backs up the regulatory concerns dissected above, and raises the likelihood of strict governance. Just as it took several years for regulators to catch up with WhatsApp use across the industry, there is always a grace period. However, just because new technology is not specifically named in existing frameworks doesn't mean that organizations like the SEC will hesitate to penalize past conduct that undermines their fundamental principles. While regulators deliberate over frameworks for AI models and the content they generate, firms must record all output, by man or machine. This will ensure compliance is covered from all angles: foundational principles and modern interpretations alike.
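As a minimal sketch of that record-everything principle, the snippet below appends every client-facing communication, whether human- or machine-authored, to a hash-chained log so that later tampering is detectable. The schema and function name are invented for illustration; regulated firms would use dedicated write-once archival storage rather than a local file.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_communication(log_path: str, channel: str, author: str, content: str) -> str:
    """Append a client-facing communication to a tamper-evident log.

    Each entry carries a hash of the log as it stood before the append,
    so after-the-fact edits are detectable. Illustrative only: production
    systems would use dedicated write-once (WORM) storage.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "channel": channel,  # e.g., "chatbot", "advisor_email"
        "author": author,    # human user ID or model name
        "content": content,
    }
    try:
        with open(log_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = "genesis"
    entry["prev_hash"] = prev_hash
    line = json.dumps(entry)
    with open(log_path, "a") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()

# record_communication("comms.log", "chatbot", "assistant-v1", "Here is our refund policy ...")
```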
The financial services industry is cautiously embracing AI technologies, recognizing both their potential and risks. This story explores the current landscape, challenges, and future outlook of AI adoption in fintech.
The financial services industry is at a crossroads, carefully navigating the integration of Artificial Intelligence (AI) into its operations. As we move into the latter half of 2024, a picture of cautious optimism is emerging, with institutions recognizing both the transformative potential and the inherent risks of AI technologies [1].
Financial institutions are increasingly adopting AI for various applications, from customer service chatbots to complex risk assessment models. However, this adoption comes with a heightened awareness of the need for robust governance and risk management frameworks. The industry is grappling with challenges such as data privacy, algorithmic bias, and the potential for AI-driven financial crimes [2].
One of the most significant hurdles in AI adoption is ensuring compliance with existing and emerging regulations. Financial institutions are investing heavily in developing AI systems that can meet stringent regulatory requirements while still delivering innovative solutions. The use of AI chatbots in customer interactions, for instance, has raised questions about maintaining compliance in areas such as know-your-customer (KYC) and anti-money laundering (AML) protocols [4].
As the initial excitement around AI begins to settle, financial institutions are focusing on the practical aspects of implementation. This includes addressing the "devil in the details" -- the nuanced challenges that arise when working with AI products in real-world scenarios [3]. Issues such as data quality, model interpretability, and the need for human oversight are coming to the forefront.
The financial services industry is charting a course of thoughtful AI adoption. This approach involves:
- Building robust governance and risk management frameworks before scaling deployments
- Designing AI systems to satisfy existing and emerging regulatory requirements, including KYC and AML obligations
- Keeping humans in the loop while addressing data quality, model interpretability and algorithmic bias
As we look to the future, it's clear that AI will play an increasingly important role in financial services. However, the industry's success will depend on its ability to harness AI's potential while effectively mitigating its risks. The coming years will likely see a continued focus on responsible AI adoption, with an emphasis on transparency, accountability, and customer trust.
References
[1] Council Post: Financial Services And AI: Cautious Optimism Paves The Way For Thoughtful Adoption
[2] Council Post: Why Fintech Needs To Think Beyond AI
[3] Council Post: The Devil Is In The Details When Working With AI Products
[4] The Generative Generation - AI, chatbots and financial compliance