Publication: Clinical Failure of General-Purpose AI in Photographic Scoliosis Assessment: A Diagnostic Accuracy Study
| dc.contributor.author | Aydin, Cemre | |
| dc.contributor.author | Duygu, Ozden Bedre | |
| dc.contributor.author | Karakas, Asli Beril | |
| dc.contributor.author | Er, Eda | |
| dc.contributor.author | Gokmen, Gokhan | |
| dc.contributor.author | Ozturk, Anil Murat | |
| dc.contributor.author | Govsa, Figen | |
| dc.date.accessioned | 2026-01-04T22:17:04Z | |
| dc.date.issued | 2025-07-25 | |
| dc.description.abstract | Background and Objectives: General-purpose multimodal large language models (LLMs) are increasingly used for medical image interpretation despite lacking clinical validation. This study evaluates the diagnostic reliability of ChatGPT-4o and Claude 2 in photographic assessment of adolescent idiopathic scoliosis (AIS) against radiological standards. This study examines two critical questions: whether families can derive reliable preliminary assessments from LLMs through analysis of clinical photographs and whether LLMs exhibit cognitive fidelity in their visuospatial reasoning capabilities for AIS assessment. Materials and Methods: A prospective diagnostic accuracy study (STARD-compliant) analyzed 97 adolescents (74 with AIS and 23 with postural asymmetry). Standardized clinical photographs (nine views/patient) were assessed by two LLMs and two orthopedic residents against reference radiological measurements. Primary outcomes included diagnostic accuracy (sensitivity/specificity), Cobb angle concordance (Lin’s CCC), inter-rater reliability (Cohen’s κ), and measurement agreement (Bland–Altman LoA). Results: The LLMs exhibited hazardous diagnostic inaccuracy: ChatGPT misclassified all non-AIS cases (specificity 0% [95% CI: 0.0–14.8]), while Claude 2 generated 78.3% false positives. Systematic measurement errors exceeded clinical tolerance: ChatGPT overestimated thoracic curves by +10.74° (LoA: −21.45° to +42.92°), exceeding tolerance by >800%. Both LLMs showed inverse biomechanical concordance in thoracolumbar curves (CCC ≤ −0.106). Inter-rater reliability fell below random chance (ChatGPT κ = −0.039). Universal proportional bias (slopes ≈ −1.0) caused severe curve underestimation (e.g., 10–15° error for 50° deformities). Human evaluators demonstrated superior bias control (0.3–2.8° vs. 2.6–10.7°) but suboptimal specificity (21.7–26.1%) and hazardous lumbar concordance (CCC: −0.123). Conclusions: General-purpose LLMs demonstrate clinically unacceptable inaccuracy in photographic AIS assessment, contraindicating clinical deployment. Catastrophic false positives, systematic measurement errors exceeding tolerance by 480–1074%, and inverse diagnostic concordance necessitate urgent regulatory safeguards under frameworks like the EU AI Act. Neither LLMs nor photographic human assessment achieve reliability thresholds for standalone screening, mandating domain-specific algorithm development and integration of 3D modalities. | |
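The abstract reports its primary outcomes through standard agreement statistics (sensitivity/specificity, Lin's CCC, Cohen's κ, Bland–Altman limits of agreement). The sketch below is an illustrative aid only, not code or data from the article: it shows how these measures are conventionally computed, using NumPy and hypothetical placeholder values in place of the study's Cobb-angle and classification data.

```python
# Illustrative sketch only: conventional computation of the agreement
# statistics named in the abstract (Bland-Altman limits of agreement,
# Lin's CCC, Cohen's kappa, sensitivity/specificity).
# All numbers below are hypothetical placeholders, not study data.
import numpy as np

def bland_altman(measured, reference):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(measured, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient (population-moment form)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def cohens_kappa(a, b, labels=(0, 1)):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = np.mean(a == b)
    p_exp = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_obs - p_exp) / (1 - p_exp)

def sens_spec(pred, truth):
    """Sensitivity and specificity for binary labels (1 = AIS, 0 = non-AIS)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    # Hypothetical Cobb-angle estimates (degrees) vs. radiographic reference.
    est = [32, 18, 55, 24, 41, 12]
    ref = [28, 15, 48, 27, 35, 10]
    print("Bland-Altman bias, LoA:", bland_altman(est, ref))
    print("Lin's CCC:", lins_ccc(est, ref))
    # Hypothetical binary calls (model vs. radiological ground truth).
    print("Cohen's kappa:", cohens_kappa([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 1, 1]))
    print("Sens/Spec:", sens_spec([1, 1, 0, 1, 0, 1], [1, 0, 0, 1, 1, 1]))
```

Under these definitions, a negative κ (as reported for ChatGPT, κ = −0.039) indicates agreement below chance level, and a negative CCC indicates inverse concordance with the radiological reference.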
| dc.description.uri | https://doi.org/10.3390/medicina61081342 | |
| dc.description.uri | https://pmc.ncbi.nlm.nih.gov/articles/PMC12387722/ | |
| dc.description.uri | https://pubmed.ncbi.nlm.nih.gov/40870387/ | |
| dc.identifier.doi | 10.3390/medicina61081342 | |
| dc.identifier.eissn | 1648-9144 | |
| dc.identifier.openaire | doi_dedup___::c68c9d99a4e3573ef7671f20a7f0f7ad | |
| dc.identifier.orcid | 0000-0001-7140-7340 | |
| dc.identifier.orcid | 0000-0001-6504-6489 | |
| dc.identifier.orcid | 0000-0001-6366-3301 | |
| dc.identifier.orcid | 0009-0006-3510-8099 | |
| dc.identifier.orcid | 0000-0001-8674-8877 | |
| dc.identifier.orcid | 0000-0001-9635-6308 | |
| dc.identifier.pubmed | 40870387 | |
| dc.identifier.scopus | 2-s2.0-105014404938 | |
| dc.identifier.startpage | 1342 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.12597/42867 | |
| dc.identifier.volume | 61 | |
| dc.language.iso | eng | |
| dc.publisher | MDPI AG | |
| dc.relation.ispartof | Medicina | |
| dc.rights | OPEN | |
| dc.subject | Article | |
| dc.title | Clinical Failure of General-Purpose AI in Photographic Scoliosis Assessment: A Diagnostic Accuracy Study | |
| dc.type | Article | |
| dspace.entity.type | Publication | |
| local.import.source | OpenAire | |
| local.indexed.at | Scopus | |
| local.indexed.at | PubMed | |
