As a graduate course instructor gearing up to teach my first course in the ChatGPT era, I’ve wondered how students, novice clinicians, and clinical generalists (those without expertise in a specific clinical population) might rely on generative AI to fill in knowledge gaps and generate ideas for therapy. Speech-language pathology has an enormous scope of practice with constantly updating research, making it practically impossible for the average clinician to keep up with best practices in every area. And if you do try to dive in and refresh your knowledge, it’s difficult to find your footing in a sea of information.
If we start relying on tools like ChatGPT to filter out the noise and surface the key ideas, will this facilitate more evidence-based practice? Or will we risk repeating old mistakes, if that’s what makes up the bulk of the Internet’s text-based corpus?
With just a few weeks until class starts, I decided to test this out myself!
This post covers:
- The importance of prompt quality and asking the right questions
- Treatment-planning prompt examples and responses
- Evaluation of ChatGPT output
- How to ensure evidence-based practice while using AI
- Implications for clinical educators