Written by Dom Partridge, Medical Writer
As artificial intelligence (AI) technology continues to advance, natural language processing (NLP) models like ChatGPT have gained a lot of attention for their ability to generate human-like text. However, while ChatGPT and other NLP models may be impressive, they are not yet capable of truly replicating human intelligence and should not be mistaken for such.
NLP is the ability of a computer programme to understand, interpret, and generate human language. It involves using algorithms and machine learning techniques to analyse and process natural language data, such as speech and text. This technology is used in a variety of applications, including language translation, voice recognition, and text summarisation. A common application of NLP that most writers already rely on is spelling and grammar checking, through tools such as Microsoft® Word’s Editor or Grammarly. The NLP algorithms used by these checkers analyse text and compare it with a pre-existing database of correctly spelled words and grammatically correct sentences. When an error is detected, the checker suggests a correction or supplies a warning to the user. This allows the user to quickly find and fix errors in their writing, improving the overall quality and clarity of their text.
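The dictionary-comparison idea described above can be sketched in a few lines of Python. The tiny word list and the one-edit suggestion rule here are illustrative assumptions for the sake of a runnable example; real checkers such as Word's Editor or Grammarly use far larger dictionaries and statistical language models.

```python
# Minimal sketch of dictionary-based spell checking.
# KNOWN_WORDS is a toy dictionary for demonstration only.

KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def edits1(word: str) -> set:
    """All strings one edit (delete, swap, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def check(text: str) -> dict:
    """Map each word not in the dictionary to candidate corrections."""
    issues = {}
    for word in text.lower().split():
        if word not in KNOWN_WORDS:
            issues[word] = sorted(edits1(word) & KNOWN_WORDS)
    return issues

print(check("the quik brown fox"))  # flags "quik" and suggests "quick"
```

The same compare-and-suggest loop, scaled up and combined with grammar rules, is what surfaces the familiar red and blue underlines as you type.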
However, recent advances in AI, such as the release of ChatGPT and DALL-E (used to create the image accompanying this blog), have opened new possibilities for medical writers.
For example, AI models, including ChatGPT, can generate value messaging from uploaded documents and a simple text prompt. Value messaging refers to the core messages and key points that a company wishes to communicate to its audience. These messages are typically used to highlight the benefits and advantages of a product, service, or idea. Using NLP algorithms, AI models can analyse the key features and benefits of a product or service, and then generate compelling value messages that effectively communicate these key points to the audience. However, although AI can generate text that is grammatically correct with a coherent flow, it cannot yet replicate the nuance and understanding of a human writer. Additionally, AI currently lacks the ability to create infographics from text, although it can pull out key points.
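In practice, this usually means assembling the extracted key features into an instruction for the model. The sketch below shows only the prompt-building step; the product, features, and function name are hypothetical, and the model call itself is left out because APIs and parameters vary between providers.

```python
# Hypothetical sketch: turning extracted key features into a
# value-messaging prompt for a language model. All names and
# example inputs below are illustrative, not a real product.

def build_value_message_prompt(product: str, features: list, audience: str) -> str:
    """Assemble a simple instruction prompt from extracted key features."""
    feature_lines = "\n".join(f"- {f}" for f in features)
    return (
        f"You are a medical writer. Draft three concise value messages for "
        f"{product}, aimed at {audience}.\n"
        f"Base the messages only on these key features:\n{feature_lines}\n"
        f"Each message should state a clear benefit in one sentence."
    )

prompt = build_value_message_prompt(
    product="Drug X",
    features=["Once-daily oral dosing", "30% reduction in relapse rate vs comparator"],
    audience="payers",
)
print(prompt)
```

Constraining the model to the listed features, as the prompt does here, is one simple way to reduce the risk of the model inventing claims that are not in the source documents.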
Another example is the ability of AI models to summarise technical documents quickly and efficiently. This has the potential to speed up a number of tasks that medical writers perform, such as producing clinical study reports, searching for specific data points, and researching new disease areas. However, the currently available models cannot yet actively search the internet for this information, and therefore the text to be summarised needs to be uploaded, which may have implications for confidentiality.
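For comparison, the sketch below shows a classical *extractive* summariser, which scores sentences by word frequency and keeps the highest-scoring ones. Models like ChatGPT instead generate new (*abstractive*) summaries, but this simple approach illustrates the underlying NLP idea of condensing text, and it runs entirely locally, so nothing confidential leaves your machine.

```python
# Classical extractive summarisation: score each sentence by the
# corpus-wide frequency of its words, then keep the top sentences
# in their original order. A sketch, not what ChatGPT does.

import re
from collections import Counter

def summarise(text: str, n_sentences: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # Pick the top-scoring sentences, then restore document order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:n_sentences],
                 key=sentences.index)
    return " ".join(top)

report = ("The trial met its primary endpoint. "
          "The endpoint was overall survival in the trial population. "
          "Weather was nice.")
print(summarise(report, 1))
```

Note that frequency scoring favours longer, keyword-dense sentences; real systems normalise for length and filter out common stop words.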
In conclusion, while ChatGPT and other AI systems may be able to perform some tasks that have traditionally been done by people, they are not currently capable of truly replicating human intelligence and creativity. These systems are only as good as the data they are trained on, and they lack the capacity for original thought and critical thinking that humans possess. For this reason, it is best to treat AI models as useful tools rather than relying on them completely.
If you are interested in finding out more about our medical writing or value communications services, please contact the Market Access and Value Communication team at Source Health Economics, an independent consultancy specialising in evidence generation, health economics, and communication.