On January 24, 2024, the CFTC's newly formed AI Task Force announced the issuance of a request for comment (RFC) on the current and potential uses and risks of artificial intelligence (AI) in the derivatives markets the CFTC regulates. The CFTC is particularly interested in compliance efforts in this area and included 20 questions focused on how CFTC-regulated firms use AI and mitigate AI-related risks. The following day, the CFTC issued a Customer Advisory warning the public about AI-driven fraud and scams.

The RFC signals the CFTC's entry into a larger movement among financial regulators aiming to understand how companies use and manage AI. The themes highlighted in the RFC align with those raised by other regulators and in the Biden Administration's Executive Order on AI, with a focus on fairness, transparency, explainability, and security. See Executive Order 14110, which we discuss here. In addition to exploring potential risks, the RFC delves into how firms leverage AI to enhance compliance programs, particularly in areas such as AML/KYC and the detection and prevention of market manipulation and other financial crimes. Although these technologies are still emerging, firms can anticipate future pressure from regulators to integrate AI into risk-based financial crime programs, especially given the increasing use of AI by illicit actors.

The CFTC acknowledged a recent publication by the International Organization of Securities Commissions ("IOSCO") finding that market intermediaries are deploying AI and machine learning for uses including advisory and support services, client identification and monitoring (including compliance with know-your-customer obligations), and risk management. See International Organization of Securities Commissions, The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers (Sept. 2021). The CFTC further noted that generative AI could be used for market analysis, to supplement human analysis, and to mitigate investment risk by designing and implementing hedging strategies.

The CFTC is specifically interested in feedback on AI applications across various areas such as trading, risk management, compliance, cybersecurity, recordkeeping, data processing, analytics, and customer interactions. Regarding compliance, the agency emphasized AI's potential impact on surveillance, AML efforts, and regulatory reporting functions.

In the press release, CFTC Chair Rostin Behnam emphasized that the RFC will advance the Commission's strategic identification of high-priority projects involving AI applications, optimizing its data-driven approach to policy, surveillance, and enforcement, consistent with Executive Order 13960. Commissioner Kristin N. Johnson highlighted the increasing integration of AI into the economy and the financial sector and explained that the RFC seeks to better understand the challenges and concerns associated with the application of AI in CFTC-regulated markets. Input received could inform future CFTC guidance, interpretations, policy statements, or regulations. The deadline for comments is April 24, 2024.

Next Steps

CFTC-regulated firms should establish a governance framework to oversee responsible AI usage, including use by third-party providers. They should ensure transparency by being able to explain AI-generated outcomes and by permitting human intervention in AI-automated processes. Firms should also clearly disclose the use of AI in customer-facing operations and allow customers to opt out and connect with a human. DWT's commodities team, along with its AI and cybersecurity teams, can assist with preparing comments on the RFC, developing and implementing a governance framework if you are planning to use AI in your business, or ensuring compliance with regulatory expectations if AI is already in place.