I gave it a try. My impression is that it is quite good at straight-faced small talk, but it cannot do anything rigorous: not only can it not do deduction, it cannot even understand first-order logic. Here are two examples:
me: Let's assume that the Sun is either cold or white, and the Sun is white only if it is black. Now if we find the Sun is not cold, can we conclude that the Sun is black?
bot: No, we cannot conclude that the Sun is black. We can only conclude that the Sun is not cold. We cannot draw any other conclusions based on the information given.
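For the record, the correct answer is yes. Writing the premises as cold ∨ white, white → black, and ¬cold: disjunctive syllogism on the first and third gives white, and modus ponens on the second gives black. A minimal Lean 4 sketch of this two-step derivation (the proposition names are mine):

    example (cold white black : Prop)
        (h1 : cold ∨ white) (h2 : white → black) (h3 : ¬cold) : black :=
      match h1 with
      | Or.inl hc => absurd hc h3  -- "the Sun is cold" contradicts ¬cold
      | Or.inr hw => h2 hw         -- the Sun is white, so by h2 it is black

So the bot's flat "No" gets even this short propositional argument wrong.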
me: If an element, a, either belongs to a set X, or another set Y, and two elements a, b are equal to each other only if they both belong to the same set. Now if x belongs to Y and x = y, can we conclude that y does not belong to X?
bot: No, because it is not possible to conclude anything about y based on the given information.
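Here the intended answer is again yes, at least on the natural exclusive reading of "either ... or": x = y forces x and y into the same set, x ∈ Y then puts y in Y, and exclusivity rules out X. A Lean 4 sketch under that exclusive-or assumption, modelling the sets as predicates (all names are mine):

    example (α : Type) (X Y : α → Prop)
        (hxor : ∀ e, (X e ∧ ¬ Y e) ∨ (¬ X e ∧ Y e))  -- exclusive membership
        (x y : α) (hxY : Y x) (hxy : x = y) : ¬ X y := by
      subst hxy                          -- replace y by x using x = y
      cases hxor x with
      | inl h => exact absurd hxY h.2    -- x ∈ Y contradicts x ∉ Y
      | inr h => exact h.1               -- this case gives x ∉ X directly

On an inclusive reading the conclusion would not follow, but the bot did not raise that ambiguity either; it simply claimed nothing can be concluded.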
On another note, I found a fun fact: it mixes up Cauchy's Theorem with Cayley's Theorem, a mistake I have made myself.
【 In lvsoft's post, it was mentioned: 】
: Nothing surprising about this. openai thought this example could show the level chatgpt had reached, and then someone found a hole in it.
: Recently even Go-playing AIs have had exploits found against them. I expect this to be a common occurrence for quite a long time to come.
: That is, take any AI, and you may well find an utterly trivial example that triggers a baffling failure that would never happen to an ordinary person.
: ...................
--
FROM 188.67.137.*