(Language Arts Version)
By Chloe Yang (5G) and Muching Yan, Timothy Yang (7G)

[Images from the internet; all rights belong to the original authors]
Hypothesis
Chloe:
I think that ChatGPT will currently be in the lead because ChatGPT’s main language is English, and Language Arts doesn’t involve complex formulas or extremely precise functions.
Muching:
I think ChatGPT will be in the lead because Language Arts does not require complicated diagrams and scales the way math does. (See Who is smarter, Artificial Intelligence or Human? Math Version)
Materials & Process
Materials
- Access to ChatGPT 4
- Access to ChatGPT 3.5
- ISEE Practice test
Process
We compared our scores with ChatGPT’s.
We both answered the questions on the test and checked our answers.
We gave ChatGPT the same set of questions and graded its answers.
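For readers curious how the grading step could be automated, here is a minimal Python sketch: it compares a set of multiple-choice answers against an answer key and counts the matches. The question numbers and answers below are made up for illustration; in our experiment, our parents graded by hand.

```python
# Hypothetical answer key and responses, just to show the idea.
answer_key = {1: "C", 2: "A", 3: "D", 4: "B", 5: "D"}
responses  = {1: "C", 2: "A", 3: "D", 4: "B", 5: "A"}  # e.g. ChatGPT's answers

# Count how many responses match the key.
correct = sum(1 for q, a in answer_key.items() if responses.get(q) == a)
print(f"Score: {correct}/{len(answer_key)}")
```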
Our Interaction Experience
Working with ChatGPT 4 was easy. The process was simple: we asked ChatGPT the questions, and the answers came quickly, along with additional information that made it easier for our parents to grade.
Working with ChatGPT 3.5 went smoothly as well. Once we typed the questions, the answers came within seconds. They were a bit harder for our parents to grade, but the process was short and brief.




ChatGPT 4’s and 3.5’s answers were the same! ChatGPT didn’t take long to answer the questions. Also, after we entered the instructions once, we didn’t have to enter them again. This time the answers were not as detailed and professional-looking, but they took significantly less time than they did on the math test.
Neither ChatGPT 4 nor 3.5 solved #5.
Both ChatGPT 4 and 3.5 had excellent accuracy on the three tests.
Conclusion
ChatGPT is currently in the lead
To summarize, even though ChatGPT is currently in the lead, that doesn’t mean we should stop studying Language Arts. Let’s make this a chance to give us some enthusiasm to get better at Language Arts. In the future, ChatGPT will definitely improve, but so will we!
Our hypothesis was correct in some ways, but that may change over time.
Side note: Sometimes ChatGPT is wrong and may mislead someone (this is called hallucination).
Have a Glance at our Procedure
For the data, we used tests where you had to choose the closest synonym to the given word. For example, a problem would look like this: 5. MINUSCULE:
(A) ferocious
(B) humongous
(C) tiny
(D) sink
Then you would choose the closest synonym. For this example, the answer would be (C) tiny.
Both ChatGPT 3.5 and 4 got only one question incorrect: #5 on the third test. The question was: 5. CONTENTED:
(A) diplomatic
(B) disgusted
(C) mammoth
(D) satisfied
The answer was (D) satisfied, and it puzzled us that both ChatGPT 3.5 and 4 got this particular question wrong. They both answered (A) diplomatic.
Acknowledgements
We would like to thank
- ISEE for supplying the problem set
- ChatGPT 4, ChatGPT 3.5, Chloe and Muching for participating
- Our parents for granting us the use of ChatGPT and helping us with the process