When I say “learning” I do not mean translating big sentences or anything like that; I mean things like asking: “Is this Japanese sentence correctly written? 雲の日が好きです” In this example it answered: "
Yes, the Japanese sentence “雲の日が好きです” is correctly written. It translates to “I like cloudy days” in English. Here’s a breakdown of the sentence:
- 雲 (くも/kumo): This means “cloud.”
- の (no): This is a possessive particle that indicates the relationship between “cloud” and “day” in this context.
- 日 (ひ/hi): This means “day.”
- が (ga): This is a subject marker particle that marks “cloudy day” as the subject of the sentence.
- 好きです (すきです/suki desu): This means “like” or “to like” and is a polite way of expressing one’s preference. The verb 好き (suki) means “to like,” and です (desu) is a polite copula used to indicate politeness and formality.
Therefore, the sentence “雲の日が好きです” correctly conveys the meaning “I like cloudy days” in Japanese.
"
Is this correct? Do you consider it reliable?
Sometimes I also ask questions like: “What function does ‘こと’ grammatically fulfill in this sentence? … "
Definitely not. ChatGPT is known to “hallucinate”, i.e. make things up, so you can’t trust that everything it says will be accurate.
ChatGPT, being generative AI, is strong at fabrication. That means things like writing an email, producing a table of contents for a book, creating a Python program to spec, or, as DigitalAudio suggested, proofreading.
It is not very strong in fact-checking, or explaining things in a different way. See this recent MDN issue, for example: https://github.com/mdn/yari/issues/9208
I think ChatGPT works fine for reviewing your text. It will usually manage to point out the bits that sound wrong or inaccurate in your text, and it’s surprisingly precise at doing it.
However, as everybody else pointed out, don’t expect accurate or well-researched answers from it, and take any grammatical or nuanced statements with a mountain of salt, because it’s highly likely that they’re completely wrong.
I’ve also asked ChatGPT to proofread and adjust my emails to coworkers and clients, making sure they sound formal and respectful enough, and I’ve found it to be very good at this. That said, I would suggest knowing enough Japanese to be able to fix anything that sounds off or is lost in translation.
I wouldn’t recommend it as a tool for beginners, but I definitely encourage advanced learners to give it a shot, since it really does improve, summarise and rephrase texts successfully a lot of the time.
I don’t remember where I read this first, but I liked this quote:
What’s so confusing about AI is that it’s good at things you wouldn’t expect it to, and it’s terrible at tasks that are stereotypically machine-like. Fact checking? Sourcing? Citations? It’s awful. But writing human-like text? Proofreading? Maintaining natural conversations? It’s awesome at that.
There are services like Lang-8 and HiNative where you can get your writing checked or ask questions, so I’d suggest using those instead.
To be fair, humans often give incorrect information too. Source: I am a human
The most important skill when working with LLMs such as ChatGPT, Bard, Bing, etc. is being able to spot their bullshit. I think the best applications for ChatGPT are writing practice texts in Japanese, translating from English, or giving readings of kanji. But if something feels off or weird, double-check it. Your example was translated correctly, but I would estimate its reliability at somewhere between 60 and 70%, depending on complexity.
But if something feels off or weird
That is the big trick, and it’s something new learners can’t do yet.
In my experience, ChatGPT gives wrong or made-up information when asked about the specifics of literally any subject.
For example, if you ask ChatGPT about the “Piano Puzzle” in Silent Hill 1, it will correctly reply that the puzzle exists. Interestingly, when I asked it to recite the poem that goes with this puzzle by its name alone, “A Tale of Birds Without A Voice,” I was surprised to see it recite the poem almost correctly. However, it left out the lines at the end about the Crow, and while it gave the answer to the riddle in the correct order, it omitted the Crow there as well.
However, it also invented steps to solve the puzzle. It claimed the piano keys are numbered and simply said “press the key corresponding to [bird name],” which is no help with the actual solution. The piano keys are not in fact numbered; the puzzle requires the player to find which keys make no sound and press them in the order of the birds described in the poem.
Next, I asked ChatGPT to analyze the specific language used in the riddle poem. The poem requires the reader to have prior knowledge of the color of each bird’s feathers in order to determine which colored piano key each bird refers to. I asked ChatGPT whether requiring this prior knowledge was bad riddle design, and it responded with “It can be a drawback, here are points to consider.” It then listed four points (accessibility, fairness, clarity and guidance, immersion and engagement) and explained how each relates to puzzle design in a generic, non-specific sense. It concluded in classic AI fashion by rewording its first paragraph and then saying, “Ideally, puzzles should provide players with the necessary information or clues within the game world to encourage exploration, deduction, and problem-solving. This allows for a more inclusive and engaging puzzle experience.” Which is basically buzzword diarrhea.
All in all, I think that while ChatGPT is a great tool for creative exercises and as a suggestion tool for world building and other creative work, anything that needs to be factual or correct should not rely on it. It simply gives too much wrong information too often. And while it sometimes gets things right, a lie is best told between two truths.
Lol, did you write this reply with ChatGPT, or is “All in all” just a nod to text generated by it? Either way, your comment is hilarious XD
I didn’t use it to write the comment, nor was it intentional, but I will concede that it does sound like something ChatGPT might say. The most important point I was making still stands, though: ChatGPT is really bad as a reliable source of information, and anyone expecting it to give factual answers about everything they ask it is a moron.
Bing AI is better since it gives sources, etc.
I was once researching a science topic using Bing Chat. It gave me completely invalid references that didn’t include anything similar to what I was searching for. When I pointed that out, Bing apologized and then gave me the same references 🤷