Despite the rapid development and adoption of digitization in healthcare, there remains substantial untapped potential in the field of artificial intelligence (AI). Among many other areas, diagnostic decision support systems (DDSS) appear particularly promising, especially in sports medicine, where acute injuries as well as chronic pathologies are common not only in elite athletes but also in amateurs. While some traumatic injuries and severe pathologies require the immediate involvement of medical personnel, a DDSS might help patients and medical professionals to understand less clear-cut complaints more effectively, resulting in faster and more appropriate treatment. To establish an initial impression of the potential of a DDSS, five different fictional sports injuries and pathologies were analyzed with an AI chatbot app in this case report.
The injuries and pathologies selected for this report were considered by the authors to be fairly typical and widely occurring across a variety of sports. In the case of “simple complaints” such as muscle pain or tennis elbow, the information provided by the app may help the patient to self-treat. It should be critically noted, however, that the tennis elbow diagnosis could have been more precise if the app had been more knowledgeable about orthopedic pathologies and had asked whether the medial or lateral side of the elbow was affected. In the case of the ankle sprain, the recommendation to visit an emergency unit can be considered satisfactory, since it may be hard to distinguish fractures from ligament injuries after such a trauma, and a professional medical examination (with a possible X-ray) could therefore provide decisive clarity. In the case of the concussion, the recommendation to consult a doctor is certainly correct. Here, the biggest criticism is that the recommendation did not take time into consideration, which could cause a delay, resulting in a more dire outcome. Hence, a guideline-conforming follow-up by AI algorithms after minutes to hours could be built into the app to enable fast implementation of the recommendations. Other studies have already shown significant differences in diagnostic capacity among different algorithms in the context of concussion, but the data also suggest great potential for AI diagnostic support as an assisting tool for clinicians.
The app used in the present case report showed weaknesses in the case of chronic ACL instability, since the recurring, chronic character of the condition was not sufficiently captured by the app's selected questions, and the suggested main diagnosis of bursitis would not be consistent with a feeling of instability. In the case of an acute ACL rupture, the app's suggestion to visit an emergency department would have been reasonable, but it was not suitable in this specific case with a chronic condition. Nevertheless, it can be assumed that the recommended visit to a doctor, even though prompted by the wrong primary diagnosis of bursitis, would have been beneficial in this case, likely leading eventually to a correct human-made diagnosis.
Various aspects must be critically discussed in the context of this report. One major limitation is that this case report used only one of the various apps on the market, and efficacy will likely vary between different algorithms. In general, it has to be acknowledged that the purpose of current AI-based chatbots cannot yet be seen as diagnosing complex clinical injuries or pathologies; rather, they are intended to give patients useful insight before they have a chance to meet or speak with a medical professional.
Another critical aspect, mentioned previously, is the potential dependence on the user's understanding, as it has been shown that differences in users' knowledge can lead to different results with a DDSS app. It is therefore unclear whether the fictional patients would have given the same answers in real life, achieving the same results as presented here.
Critical aspects of current AI applications can also be suspected in the cases of mild concussion and muscle pain. The fictitious clinical scenarios involving these conditions were all correctly diagnosed here. In a real medical setting, however, both can be fluid processes that, in extreme cases, could develop into a severe traumatic brain injury or a compartment syndrome. The app should therefore provide follow-ups if patients decide not to seek medical support.
This may pose legal problems: is the manufacturer responsible if the app does not recognize a pathology correctly? Patient confidence in an app can be high, even if chatbot apps, like the one used in this report, state that they do not make any medical diagnoses and that qualified health care providers should be contacted regarding any medical issues. However, such disclaimers remain very vague, because in cases such as concussion or compartment syndrome, a few hours can already bring dramatic changes. In addition to improving the algorithms, a follow-up function that actively reminds the patient via the app (perhaps by push notification) would be a possible option, similar to a follow-up examination in a hospital setting.
In the context of DDSS apps, the question of how doctors should deal with false-positive findings will also be of interest: could this lead to overtreatment for fear of legal consequences? Conversely, what could the legal consequences be if physicians allowed themselves to be influenced by false-negative app findings?
What is certain is that, despite the many human-made misdiagnoses, affecting up to 12 million cases or 5% of all adult patients in the USA, any serious AI error will likely attract media attention similar to that seen today for a one-off accident involving a semi-autonomously driving Tesla car. One day, the use of AI may help to diagnose and even predict the occurrence of sport-related injuries. As of today, however, AI-based chatbots still appear to lag behind other machine learning algorithms, since natural language processing remains a complicated issue and most FDA-approved AI-based medical technologies do not use it.