Results by year: table of search results per year, 1800–2024. Counts remain in the single digits until 1945, grow steadily through the following decades, and peak at 21,908 results in 2021 (the 2024 count is partial).

Search Results

399,968 results

The following term was not found in PubMed: 6-trimethyl-3-cyclohexenyl-4-penten-3-one
Levothyroxine.
[No authors listed]. Drugs and Lactation Database (LactMed®) [Internet]. Bethesda (MD): National Institute of Child Health and Human Development; 2006–. 2023 Sep 15. PMID: 30000062. Free Books & Documents. Review.
Comparison of ChatGPT-3.5, ChatGPT-4, and Orthopaedic Resident Performance on Orthopaedic Assessment Examinations.
Massey PA, Montgomery C, Zhang AS. J Am Acad Orthop Surg. 2023 Dec 1;31(23):1173-1179. doi: 10.5435/JAAOS-D-23-00396. Epub 2023 Sep 4. PMID: 37671415. Free PMC article.

There was a difference among the three groups in testing success, with ortho residents scoring higher than ChatGPT-3.5 and GPT-4 (P < 0.001 and P < 0.001). GPT-4 scored higher than ChatGPT-3.5 (P = 0.002). ...Both ChatGPT-3.5 and …
Biodegradation of hexahydro-1,3,5-trinitro-1,3,5-triazine.
McCormick NG, Cornell JH, Kaplan AM. Appl Environ Microbiol. 1981 Nov;42(5):817-23. doi: 10.1128/aem.42.5.817-823.1981. PMID: 16345884. Free PMC article.
Biodegradation of the explosive hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) occurs under anaerobic conditions, yielding a number of products, including: hexahydro-1-nitroso-3,5-dinitro-1,3,5-triazine, hexahydro-1,3-dinitro …
Distinguishing ChatGPT(-3.5, -4)-generated and human-written papers through Japanese stylometric analysis.
Zaitsu W, Jin M. PLoS One. 2023 Aug 9;18(8):e0288453. doi: 10.1371/journal.pone.0288453. eCollection 2023. PMID: 37556434. Free PMC article.
In this study, first, we compared Japanese stylometric features of texts generated by ChatGPT, equipped with GPT-3.5 and GPT-4, and those written by humans. In this work, we performed multi-dimensional scaling (MDS) to confirm the distributions of 216 texts of three …
GPT-4 in Nuclear Medicine Education: Does It Outperform GPT-3.5?
Currie GM. J Nucl Med Technol. 2023 Dec 5;51(4):314-317. doi: 10.2967/jnmt.123.266485. PMID: 37852647.
Results: ChatGPT powered by GPT-3.5 performed poorly in calculation examinations (31.4%), compared with GPT-4 (59.1%). GPT-3.5 failed each of 3 written tasks (39.9%), whereas GPT-4 passed each task (56.3%). ...
Assessing the Performance of GPT-3.5 and GPT-4 on the 2023 Japanese Nursing Examination.
Kaneda Y, Takahashi R, Kaneda U, Akashima S, Okita H, Misaki S, Yamashiro A, Ozaki A, Tanimoto T. Cureus. 2023 Aug 3;15(8):e42924. doi: 10.7759/cureus.42924. eCollection 2023 Aug. PMID: 37667724. Free PMC article.
For each problem type, GPT-4 showed a higher accuracy rate than GPT-3.5. Specifically, the accuracy rates for compulsory questions improved from 58.0% with GPT-3.5 to 90.0% with GPT-4. For general questions, the rates went from 64.6% with GPT-3. …
Performance of ChatGPT, GPT-4, and Google Bard on a Neurosurgery Oral Boards Preparation Question Bank.
Ali R, Tang OY, Connolly ID, Fridley JS, Shin JH, Zadnik Sullivan PL, Cielo D, Oyelese AA, Doberstein CE, Telfeian AE, Gokaslan ZL, Asaad WF. Neurosurgery. 2023 Nov 1;93(5):1090-1098. doi: 10.1227/neu.0000000000002551. Epub 2023 Jun 12. PMID: 37306460.
BACKGROUND AND OBJECTIVES: General large language models (LLMs), such as ChatGPT (GPT-3.5), have demonstrated the capability to pass multiple-choice medical board examinations. ...By contrast, Bard scored 44.2% (66/149, 95% CI: 36.2%-52.6%). GPT-3.5 an …
The Performance of GPT-3.5, GPT-4, and Bard on the Japanese National Dentist Examination: A Comparison Study.
Ohta K, Ohta S. Cureus. 2023 Dec 12;15(12):e50369. doi: 10.7759/cureus.50369. eCollection 2023 Dec. PMID: 38213361. Free PMC article.
Results The overall correct response rates were 73.5% for GPT-4, 66.5% for Bard, and 51.9% for GPT-3.5. GPT-4 showed a significantly higher correct response rate than Bard and GPT-3.5. ...GPT-4 outperformed GPT-3.5 and Bard (p<0.01). T …
ChatGPT and Patient Information in Nuclear Medicine: GPT-3.5 Versus GPT-4.
Currie G, Robbie S, Tually P. J Nucl Med Technol. 2023 Dec 5;51(4):307-313. doi: 10.2967/jnmt.123.266151. PMID: 37699647.
The GPT-3.5-powered ChatGPT was released in late November 2022 powered by the generative pretrained transformer (GPT) version 3.5. ...GPT-3.5 produced patient information deemed not fit for the purpose. ...
Artificial Intelligence in Ophthalmology: A Comparative Analysis of GPT-3.5, GPT-4, and Human Expertise in Answering StatPearls Questions.
Moshirfar M, Altaf AW, Stoakes IM, Tuttle JJ, Hoopes PC. Cureus. 2023 Jun 22;15(6):e40822. doi: 10.7759/cureus.40822. eCollection 2023 Jun. PMID: 37485215. Free PMC article.
Results GPT-4 outperformed both GPT-3.5 and human professionals on ophthalmology StatPearls questions, except in the "Lens and Cataract" category. ...There were variations in performance across difficulty levels (rated one to four), but GPT-4 consistently performed …