Oracle’s Vision and Strategy Around AI and Generative AI
We recently spoke with Richard Smith, vice president of technology EMEA at Oracle, about Oracle’s vision and strategy around AI and, of course, generative AI, an area many tech giants are currently working on. We asked him what he thinks of the call for a six-month pause in AI development.
Smith says: “I don’t know whether a pause is going to help right now; at Oracle we have been using AI for a long time. What we really need are standards that set boundaries for AI, because without those boundaries risks arise.”
As an example, he mentions how easily the new AI models can be influenced and the risk that entails. If those models read often enough that the sky is green, they will come to believe it. Extend that line to the healthcare industry: introduce incorrect data there and wrong decisions will soon follow, with potentially catastrophic consequences.
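To make that risk concrete, consider a plausibility gate that keeps obviously incorrect records out of a model’s training data. The sketch below is a generic illustration, not an Oracle product feature; the field names and accepted ranges are assumptions.

```python
# Minimal sketch of a plausibility gate for incoming health records.
# Purely illustrative: field names and accepted ranges are assumptions,
# not taken from any Oracle product.

PLAUSIBLE_RANGES = {
    "heart_rate_bpm": (20, 250),
    "body_temp_c": (30.0, 45.0),
    "systolic_bp": (50, 260),
}

def is_plausible(record: dict) -> bool:
    """Return True only if every known field falls inside its accepted range."""
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None or not (low <= value <= high):
            return False
    return True

# Records that fail the check never reach the training set,
# so the model cannot learn from obviously incorrect data.
incoming = [
    {"heart_rate_bpm": 72, "body_temp_c": 36.8, "systolic_bp": 120},
    {"heart_rate_bpm": 720, "body_temp_c": 36.8, "systolic_bp": 120},  # corrupt
]
clean = [record for record in incoming if is_plausible(record)]
print(len(clean))  # 1
```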
Oracle’s AI Infrastructure
Oracle is not averse to AI; it has been using the technology for years. The best-known example is the Autonomous Database, a fully automated database maintained by AI. Based on how the database is used, its configuration is optimized so that it performs better. If patches are needed, they are rolled out automatically, and if something goes wrong, the problem is resolved automatically. All of this is driven by AI.
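From an application’s point of view, that automation is largely invisible: you connect and query as you would with any Oracle database, while tuning, patching and self-repair happen underneath. The sketch below, using the python-oracledb driver, assumes an already provisioned Autonomous Database instance; the credentials, DSN alias and table are placeholders.

```python
# Minimal sketch: querying an (assumed, already provisioned) Autonomous Database.
# The service handles tuning, patching and self-repair itself, so the
# application code contains no maintenance logic at all.
import oracledb

# Placeholder credentials and DSN; in practice these come from a wallet or vault.
connection = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="mydb_high",  # TNS alias from the Autonomous Database wallet
)

with connection.cursor() as cursor:
    cursor.execute(
        "SELECT order_id, status FROM orders WHERE status = :s",
        s="OPEN",
    )
    for order_id, status in cursor:
        print(order_id, status)

connection.close()
```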
In addition, Oracle develops its so-called Fusion apps, applications with which organizations can run their business. Examples include Oracle Fusion ERP and Oracle Fusion HCM, but there are also industry-specific solutions for healthcare, financial institutions, telecom providers and hotels. All of these applications use AI to provide a better experience or to detect things that can help customers.
Data Integrity is Paramount
When developing AI models, Oracle always places a strong focus on data integrity. If you cannot trust your data, the AI models you run on it are of no use to you: wrong data leads to wrong analyses. For example, Fusion HCM includes an AI that is meant to prevent bias in HR decisions. Preventing bias is one of the most frequently discussed topics in the AI world because it is extremely difficult, and it is only possible with very good data integrity.
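Oracle does not detail how that Fusion HCM feature works, but a common, generic way to surface bias in HR data is to compare selection rates between groups of candidates. The sketch below does exactly that; it is an illustration of the idea, not Oracle’s implementation, and the field names and the 0.8 threshold (the well-known four-fifths rule) are assumptions.

```python
# Generic adverse-impact check on hiring data; illustrative only, not the
# Fusion HCM implementation. Field names and the 0.8 threshold are assumptions.
from collections import defaultdict

def selection_rates(candidates):
    """Map each group to the share of its candidates that were hired."""
    hired, total = defaultdict(int), defaultdict(int)
    for candidate in candidates:
        total[candidate["group"]] += 1
        hired[candidate["group"]] += 1 if candidate["hired"] else 0
    return {group: hired[group] / total[group] for group in total}

def adverse_impact(candidates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the best rate."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

candidates = [
    {"group": "A", "hired": True}, {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True}, {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]
print(adverse_impact(candidates))  # {'A': False, 'B': True} -> group B is flagged
```

Such a check is only meaningful if the underlying records are complete and correct, which is exactly the data-integrity point Smith makes.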
Oracle and Generative AI
According to Smith, Oracle is looking at generative AI: what can Oracle do with it, how could it be added to its products, what added value would it bring and how well does it work? Oracle has not entered into a partnership with OpenAI.
At the moment, Smith says, Oracle concentrates on the underlying infrastructure and on specialized generative AI. Oracle runs quite a few large AI projects and generative AI certainly appeals to the company, yet it does not seem to be joining the race for the best chatbot the way OpenAI, Microsoft and Google are.
In short, Oracle wants to provide the best possible infrastructure for AI development, backed by standards that set boundaries for AI and by a strong focus on data integrity, because wrong data leads to wrong analyses and can have catastrophic consequences. On top of that infrastructure, the company builds specialized generative AI and Fusion apps that use AI to improve the user experience and surface insights for customers. Rather than partnering with OpenAI, Oracle is focusing on its own AI projects.