
Research Reveals ChatGPT’s Writing Proficiency Differs Across Industries

Image: ChatGPT – Optimizing Language Models for Dialogue

The recent release of ChatGPT, a chatbot developed by OpenAI, has created a lot of buzz and raised the possibility of a future in which AI takes over the role of human content creators. However, a new study of ChatGPT’s writing proficiency has found that its ability to produce convincingly human-sounding content varies across industries. The study’s findings shed light on the current limitations of AI in content creation and suggest that human expertise and input remain essential in certain areas.

The Survey

Tooltester, a company whose experts review digital tools and platforms, surveyed over 1,900 Americans to determine whether ChatGPT can write content that passes as human across different topics.

Twenty-five questions were selected from the entertainment, finance, technology, travel, and health sectors based on their high search volume on Google. Each question had three possible answers: one from ChatGPT (AI), one from a human journalist, and one generated by AI and then edited by a human copywriter. The order of questions and answers was randomized so that each participant saw only one answer per question.

ChatGPT was given prompts related to each question’s topic and asked to provide concise answers in the style of an expert. Any phrasing that revealed the answer was written by an AI was removed from the final version. Human-written content was sourced from expert sites with in-depth coverage of the topic; sites that disclosed using AI in their content were excluded from the analysis.
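For readers curious what such a randomized setup looks like in practice, here is a minimal sketch in Python. It assumes a simple per-participant random assignment; the assign_answers function and the variant labels are illustrative assumptions, not Tooltester’s actual tooling:

```python
import random

# The three answer variants described in the survey (labels are illustrative)
SOURCES = ["ai", "human_journalist", "ai_edited_by_human"]

def assign_answers(question_ids, seed=None):
    """Shuffle the question order and randomly pick one answer variant per
    question for a single participant, mirroring the design described above."""
    rng = random.Random(seed)
    questions = list(question_ids)
    rng.shuffle(questions)  # randomize the order questions are shown in
    # Each participant sees exactly one of the three variants per question
    return {q: rng.choice(SOURCES) for q in questions}

# Example: one participant's randomized view of the 25 questions
assignment = assign_answers(range(1, 26), seed=42)
print(assignment)
```

Because each participant sees only one variant per question, responses for the three variants can be compared across the pool of participants without any one reader seeing the same question twice.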

The Results

Image: Writing code on a laptop for ChatGPT

Tooltester’s research aimed to determine whether readers could distinguish between content created by AI and by human authors in five key sectors: entertainment, finance, travel, technology, and health. The following provides an overview of the findings from the more than 1,900 participants:

Technology Content: AI Writing Identified Most Often

The technology sector had the highest percentage of readers (51%) who correctly identified AI-generated content. The technology-related questions covered topics such as cell phones, smart technology, computer hardware, AI, and internet providers. Interestingly, women were slightly better than men at recognizing AI-generated tech content.

Entertainment Content: AI Writing More Likely to Deceive Younger Readers

For entertainment writing, which covered topics such as movies, music, theater, streaming, and video games, 47.3% of respondents correctly identified the AI content. However, readers aged 18-24 were more likely than other age groups to mistake AI-generated content for human-authored material (41.1% did so). Meanwhile, only 38.9% of readers correctly guessed that the human-written entertainment content was authored by a human.

Travel Content: Readers Found It Harder to Identify AI Content

47% of participants correctly identified the AI-generated travel content, which covered affordable flights and hotels, rental car tips, and preparing for outdoor travel, while 35.9% believed it was written by a human. Interestingly, the human-written travel content divided readers: 41.6% guessed it was human-authored, while 40.5% thought it was written by AI.

Finance Content: AI Writing More Easily Identified

Readers correctly identified AI content in the finance sector 45.8% of the time, while 37.2% still believed that the AI text was written by a human. For human-authored content, 40.5% correctly guessed it was created by a human, but 42.5% thought it was generated by AI.

Health Content: AI Writing Most Deceptive

The health-related questions covered topics such as mental health conditions, hip replacement costs, preventative health screenings, fitness plans, and the dangers of paracetamol. Shockingly, 53.1% of readers believed the AI-generated content was written by a human. For human-authored health content, a plurality (44.9%) thought it was generated by AI, while only 37.9% correctly identified it as human-written. This finding is especially concerning as AI becomes more prevalent in the healthcare industry.

Summary

Tooltester’s study concludes that people find it difficult to differentiate between content written by AI and human authors across different industries. However, the level of difficulty varies between sectors, with technology content being the most easily identified as AI-generated, and health content being the most deceptive.

Throughout the survey, younger readers (aged 18-24) generally struggled the most to identify AI-generated content, while readers aged 65 and over were better able to spot it.

Sources:
Unsplash – Image 1, Image 2
tooltester.com