Do you find it hard to tell exactly why there’s something wrong with your AI generated academic writing? This time we’re looking at some essential prompt engineering tips for writing your research. If your university bans the use of generative AI for this (and some still do), then please take these steps yourself instead of asking an AI tool to do it.
Why can’t some people recognise the defects in AI generated academic writing?
It’s particularly hard for two kinds of people to pinpoint why such writing is a bit off. Let’s take AI generated writing in English as an example. These people are:
Speakers of English as a second or third language
Native English-speaking students who are new to university and whose parents never studied at university
(I was in the second category once upon a time.)
It’s not easy for these kinds of students to distinguish well-justified arguments from those that merely sound reasonable but are not properly supported.
What should you avoid?
Sentences that begin with “research reveals” may not tell us the study’s aims, the methodology, or how the investigated constructs were operationalised. All of these elements are important in evaluating the results of any research. Whether you are using AI or not, a list of short summaries of research findings is inadequate without this kind of detail.
Similarly, why set out categories of a phenomenon unless we know the purpose or planning behind the classification, or the tools used to divide the phenomenon into its types?
What should you do?
When you devise your research prompts, it is important to make them tell the AI to include the nuances described above (or attend to these yourself when researching). If you do not, the text that tools like ChatGPT spit out can seem simplistic. Nevertheless, such AI generated text can seem convincing, especially to less confident second language speakers or people new to the world of academia. Such generalised AI generated text is written as if it declares something to be true, despite the superficiality of its details or reasoning. One way to build these nuances into a prompt is shown below.
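For example, a prompt along these lines (the exact wording is only a suggestion, not a guaranteed formula) can push the tool beyond bare findings: “For each study you summarise, state its aims, the methodology used, and how the key constructs were operationalised. Do not report a finding without this context.”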
One user of a paid AI tool explained to me that she asks it to scan a set of reliable references. This is a great idea: firstly, she has a good basis for the research, and secondly, she knows where the specific ideas come from. She also always adds the instruction: “Do not hallucinate.” This is less clearcut, though. How can the tool know when it is hallucinating or not?
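If you want to try her approach, one possible wording (an illustration only, not a tested formula) is: “Base your answer only on the references I have supplied. If a point cannot be traced to one of them, say so explicitly instead of filling the gap.” An instruction tied to the supplied sources at least gives the tool something checkable, unlike a bare “Do not hallucinate.”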
One final point about AI generated text: be careful how you ask it to write for an academic audience. When I tried this and simply asked it to “use an academic style,” it generated long and complex wording that was completely unnecessary and even buried the main points. Nevertheless, some people new to using the language in a university environment may think this is actually a sophisticated and appropriate style. Wrong! A more specific style instruction, like the one below, may work better.
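For example (again, one possible wording rather than a guaranteed fix): “Write in a formal academic register, but prefer plain, precise wording over long or ornate phrasing, and keep each sentence focused on a single point.”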
I hope these tips help you to develop your art of prompt engineering. Of course, if your university forbids the use of generative AI for this kind of research, then please do this yourself rather than asking an AI service to do it for you.