ChatGPT: Implications for training

Dr Jim Taylor, IBMS Head of Digital Education, looks at the latest developments and whether we should embrace ChatGPT.

Around the world, artificial intelligence (AI) spell-checks our work, reviews X-rays and CT scans, analyses patient records and more. Most people are happy for AI to help in these situations because it saves time and/or keeps us safe. Generative-AI platforms, such as ChatGPT, are different from traditional AI systems because they assimilate information to produce new responses that were not pre-programmed. ChatGPT went viral in December 2022 because of its ability to understand nuance and context, making interaction with the platform much more human-like.

ChatGPT is able to foil plagiarism detection software because it synthesises “new” text, based on the definition of the words that appear in the question and the context in which those words are used, rather than “cutting and pasting” information word for word. When I asked ChatGPT “how can ChatGPT help me as a trainee biomedical scientist in the UK?” (see box, below), the response was vague and seemed to be nothing to worry about – so what’s the problem?

Foiling plagiarism

One example of ChatGPT’s ability to generate student essays that foiled plagiarism detection software and independently achieved pass scores came from the June 2022 sample paper of the United States Medical Licensing Examination (USMLE). The USMLE is a standardised set of questions spanning multiple medical disciplines, and ChatGPT was about 55% accurate when answering its multiple-choice questions. It performed much better on open-ended questions (but not on questions involving illustrations, photographs or graphs).


How can ChatGPT help me as a trainee?

ChatGPT can assist as a learning tool for a trainee biomedical scientist in the UK, offering research support, clarifying concepts, providing writing guidance, and reinforcing knowledge. However, ensure information is verified from reliable sources, and rely on mentors, professors, and professional guidelines for accuracy and ethical practices.


Multiple-choice questions are either right or wrong, whereas open-ended questions are subject to interpretation by whoever is answering and marking them. Whilst ChatGPT admits that “responses are based on patterns from my training data… I might not have access to the specific answer or the most up-to-date information”, it is able to produce answers that are relevant, contain useful information and are focused on the question. This poses an issue for any long written pieces of work that are often used to assess a candidate’s knowledge and understanding of a topic during their training and/or studies. To reduce the potential for nefarious use of AI software (including ChatGPT), we will need to rethink the types of questions and tasks set for portfolio evidence in training laboratories and assessment tasks in higher education.

Work with ChatGPT

This change in approach will require careful negotiation of “it’s always been done this way” resistance, including a conscious move away from training material that tests knowledge by factual recall and/or the subjective interpretation (by the assessor) of structured written-answer questions.

To achieve this, we need to rethink our training approaches, ensuring we work with ChatGPT rather than against it. This will mean updating training materials whilst trying to manage a busy laboratory service – no easy task.


Would I be struck off?

Using ChatGPT to complete your professional registration portfolio may potentially be viewed as a breach of professional ethics and integrity by the Health and Care Professions Council (HCPC) in the UK. It is important to remember that your portfolio should reflect your own skills, knowledge, and experience as a biomedical scientist. It is advisable to consult with your professional body or regulatory authority regarding their specific guidelines on the use of AI language models like ChatGPT in professional documentation.


Here are some suggestions for how these changes could be implemented:

1. Review training materials:

Ensure that any questions set require conceptual understanding, rather than simply factual recall; base questions on intent (e.g. “how/why would you…” or “show me how/why…”) and limit the use of questions requiring a standalone written response. Use, for example, tasks that involve interpreting images or control (Shewhart) charts, accompanied by notes on the discussion or a short self-reflection on what was learned by completing the task and how this will be applied next time. The key is that the candidate evidences how they have engaged with a specific laboratory-based task that is unique to them and cannot be simulated by AI software. Miller’s pyramid model of medical education can really help with creating this type of activity.

Base factual recall questions on specific, local data (e.g. local reference ranges, procedures or policies); use platform-specific exemplar data (e.g. QA results, which require specific interpretation). The key here is specificity – the more specific to the platform, laboratory, process, etc., the better – along with active demonstration of the activity, not a written piece on the theory of how a piece of equipment works.
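To illustrate what “specific interpretation” of QA results could look like, here is a minimal Python sketch of one possible exercise: the trainee is given a run of QC results together with their own laboratory’s method mean and SD, and must explain each flag raised against their analyser’s data. The rules shown are simplified versions of the 1-3s and 2-2s Westgard rules, and the function name, values and units are all invented for illustration – this is an assumed example, not part of any IBMS guidance.

    # Hypothetical sketch: flag simplified Westgard rule violations in a run
    # of internal QC results, using the laboratory's own method mean and SD.
    # All names and numbers below are invented for illustration only.

    def westgard_flags(results, mean, sd):
        """Return (position, rule) pairs for simplified 1-3s and 2-2s checks."""
        # Convert each QC result to a z-score against the local mean/SD
        z = [(x - mean) / sd for x in results]
        flags = []
        for i, zi in enumerate(z):
            if abs(zi) > 3:  # 1-3s: one result more than 3 SD from the mean
                flags.append((i, "1-3s"))
            # 2-2s: two consecutive results beyond 2 SD on the same side
            if i > 0 and ((zi > 2 and z[i - 1] > 2) or (zi < -2 and z[i - 1] < -2)):
                flags.append((i, "2-2s"))
        return flags

    if __name__ == "__main__":
        # Invented example: sodium QC material, local mean 140.0 mmol/L, SD 1.5
        qc = [139.8, 141.2, 143.2, 143.5, 140.1, 144.9]
        for pos, rule in westgard_flags(qc, mean=140.0, sd=1.5):
            print(f"Result {pos + 1} ({qc[pos]} mmol/L) breaches rule {rule}")

Because the flags depend on the candidate’s own mean, SD and results, a generic AI-generated answer cannot substitute for their explanation of why each rule was breached and what corrective action their laboratory’s procedures require.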

2. Trainer/trainee mentality:

It is important to reinforce that plagiarism is dishonest and calls people’s integrity into question. Using other people’s words to explain a technique or a diagram does not show any level of knowledge and understanding, just the ability to copy and paste text. I asked ChatGPT “would the HCPC strike me off for using ChatGPT to complete part of my professional registration portfolio?” – the answer (see box, above) was “maybe”.

If a candidate were to plagiarise material and it was not detected prior to submission, they may still get tripped up at verification, as they will not have assimilated the detailed knowledge required. If any plagiarism is detected by the training officer, it must be addressed, the material removed and the evidence redone prior to submission. If an assessor detects plagiarism, they will delay the assessment; the portfolio can be deemed null and void and a whole new portfolio would have to be completed.

Workplace trainers, portfolio verifiers and examiners shouldn’t be afraid to call out and check material they believe was not generated by the candidate. The IBMS supports trainers and verifiers when cases of plagiarism and misconduct using ChatGPT are identified – please get in touch with us and we can give you specific advice and guidance. We will intervene where appropriate. If you have suspicions, talk to trainees and investigate anti-AI tools, such as contentdetector.ai.

3. Establish a positive rapport with trainees:

Most instances of plagiarism and academic misconduct result from students/trainees feeling pressure to complete their work and a fear of failure. This is a really complex topic in itself, but the pressure could result from either work-related or non-work stress, which usually leads to poor time management. Similarly, a lack of understanding and a fear of asking for help are often cited as reasons for trainees engaging in this kind of misconduct. It is important for laboratory trainers, mentors and colleagues to be open, honest and approachable. The requirement to work accurately and put the patient first is always paramount, and trainees should always be working within the scope of their practice. In a supportive training environment with “psychological safety”, it is easier to ask for clarification, admit mistakes and learn from them. Building these positive working relationships makes the use of ChatGPT less likely – and it will be worth it all round.
