How good is ChatGPT at solving mathematical puzzles? Scientific American magazine set out to answer this question by asking ChatGPT to solve a set of puzzles created by the renowned puzzle maker Martin Gardner. How well ChatGPT answers them shows how strong its math skills really are. Kazi Akash, a member of Vigyanchinta’s editorial team, adapted the Scientific American article describing the experiment.
Riddle 1
There are three switches on the ground floor of a building. Exactly one of them controls a lamp on the third floor; the other two do nothing at all. You may flip any of the three switches, then go up to the third floor to see whether the lamp is on or off. From that single visit, you must say which switch actually works. Remember, you only get one chance.
Now pause for a moment and think about it yourself: what could the solution be? Then check ChatGPT’s answer below and see whether your solution matches.
As soon as the question was asked, ChatGPT replied as follows:
First, turn on the first switch. After four or five minutes, turn it off, turn on the second switch, and go to the third floor. If the lamp is on, the second switch is the working one. If the lamp is off but the bulb is warm, the first switch is the working one, because when you first turned it on, it lit the lamp long enough to heat the bulb. And if the lamp is off and cold, the third switch must be the active one.
This answer is correct. But how did ChatGPT answer this puzzle when it cannot think for itself? According to Kentaro Toyama, a computer scientist at the University of Michigan, ChatGPT already knew the answer: the puzzle was part of the vast amount of data it was trained on. In fact, this very solution was published in Scientific American in 2014.
Riddle 2
Now let’s see whether ChatGPT can solve a slightly more complex mathematical puzzle.
Two missiles are heading straight toward each other. One travels at 9,000 miles per hour, the other at 21,000 miles per hour. They were launched 1,317 miles apart. Now, without pen and paper, can you say how far apart the two missiles were exactly one minute before the collision?
ChatGPT answered this question in two steps. First it calculated that the two missiles together cover 500 miles in one minute. That is, in fact, the correct answer to the puzzle. But the artificial intelligence did not stop there, and this is where it went wrong: it then subtracted those 500 miles from the total distance of 1,317, giving a final reply of 817 miles. Needless to say, this is wrong. (Perhaps you can see why ChatGPT made this mistake. If not, here it is: the AI calculated the distance between the missiles one minute after launch, rather than one minute before the collision.)
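The correct reasoning is simple closing-speed arithmetic, and easy to verify. A minimal sketch (my own illustration, not from the article):

```python
# Speeds from the puzzle, in miles per hour.
v1, v2 = 9_000, 21_000

# The gap between the missiles shrinks at their combined (closing) speed.
closing_mph = v1 + v2                 # 30,000 mph

# One minute before impact, the remaining gap is exactly the distance
# closed in one minute.
gap_one_minute_before = closing_mph / 60
print(gap_one_minute_before)          # 500.0 (miles)
```

Note that the initial 1,317-mile separation never enters the calculation; it is the red herring that tripped up ChatGPT.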
ChatGPT was then told that the answer was wrong and was corrected: it was explained that the starting point in the wording was a trap. After everything was explained, it was asked to solve the same puzzle again, and this time the artificial intelligence solved it successfully. So far so good. But Scientific American wanted to see whether ChatGPT had actually learned anything. This time they tested it with two boats instead of missiles, and changed the numbers as well. Again, ChatGPT gave the wrong answer.
Understandably, unlike the first puzzle, this one was not directly in its training data. So the question arises: does ChatGPT really fail to grasp the logic on its own? Toyama’s point is that one line of research suggests that, given enough examples as input, artificial intelligence can pick up and apply logical patterns, using them later when needed. But other research suggests that the way neural networks currently learn is not fundamentally the same as learning logic and causality. On that view, artificial intelligence cannot learn the logic on its own unless it is specifically explained.
Now let’s probe the matter a little further!
Riddle 3
The third puzzle asks you to use each digit from 1 to 9 exactly once to form a set of three prime numbers. What is the minimum possible sum of such a set? For example, 941, 827 and 653 form one valid set; their sum is 2,421. But that sum is huge. We are looking for three primes that together use each digit from 1 to 9 exactly once and whose sum is as small as possible. Try it yourself before reading on.
A prime number is a number greater than 1 that is divisible only by 1 and by itself. Small primes like 3, 5, 7 or 11 are easy to check. The larger a number, the harder it is to test for primality.
Martin Gardner gave an answer to this question. The last digit of any multi-digit prime must be 1, 3, 7 or 9; otherwise the number is divisible by 2 or by 5. So in our set of three primes, the last digits will be 3, 7 and 9. For the leading digits we use the small digits 1, 2 and 5, and the remaining digits 4, 6 and 8 go in the middle. This gives the primes 149, 263 and 587.
Their sum is 149 + 263 + 587 = 999.
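Gardner’s construction can be checked by brute force: there are only 9! = 362,880 ways to arrange the digits 1 to 9 into three 3-digit numbers. A minimal Python sketch (my own, not from the article) that tries them all and keeps the all-prime triple with the smallest sum:

```python
from itertools import permutations

def is_prime(n: int) -> bool:
    # Trial division is plenty for three-digit numbers.
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

best_sum = None
best_triple = None
for perm in permutations("123456789"):
    # Split the 9 digits into three 3-digit numbers.
    triple = [int("".join(perm[i:i + 3])) for i in (0, 3, 6)]
    if all(is_prime(n) for n in triple):
        s = sum(triple)
        if best_sum is None or s < best_sum:
            best_sum, best_triple = s, sorted(triple)

print(best_sum)  # 999
print(best_triple)
```

The minimum sum is indeed 999, confirming Gardner’s answer (several different triples achieve it, 149 + 263 + 587 among them).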
ChatGPT’s first solution to this question was impressive: 257, 683 and 941. All three numbers are prime, and our conditions are properly met: each digit from 1 to 9 is used, and used only once. Their sum, 1,881, is much higher than Gardner’s, though. And when ChatGPT was asked to explain this solution, it gave a different one instead: 109, 1031 and 683. This solution breaks our conditions: not all the numbers have three digits, and the same digit is used more than once, although all the numbers are prime.
When ChatGPT was reminded of the conditions, it offered an explanation. According to the AI, the digits 1, 4 and 6 cannot be used as starting digits of three-digit primes, because such numbers are divisible by 3. This is completely wrong, and you can check it yourself. A three-digit number is divisible by 3 only if the sum of its three digits is divisible by 3. Take 149: the sum of its digits is 1 + 4 + 9 = 14, and 14 is not divisible by 3 in any way. Hence 149 is not divisible by 3 either; in fact, it is prime.
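The digit-sum rule ChatGPT misapplied is easy to verify in code. A small sketch (my own, for illustration):

```python
def digit_sum(n: int) -> int:
    # Sum of the decimal digits of n.
    return sum(int(d) for d in str(n))

# 149 starts with 1, yet is not divisible by 3:
print(149 % 3)             # 2, so not divisible by 3
print(digit_sum(149) % 3)  # 2, the digit-sum test agrees

# The rule holds for Gardner's whole triple:
for n in (149, 263, 587):
    assert (n % 3 == 0) == (digit_sum(n) % 3 == 0)
```

Divisibility by 3 depends on the digit sum as a whole, not on any single leading digit, which is exactly where ChatGPT’s explanation went astray.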
ChatGPT was then informed that its answers were still not correct. The AI next offered 2, 3 and 749, which is no answer at all. Then came another attempt: 359, 467 and 821. These are all prime and sum to 1,647, which is better than the first answer.
In all, ChatGPT gave six answers to the third puzzle. None of them can be called correct: each violated one condition or another, though the last answer was the best. After that, however, ChatGPT simply repeated an earlier reply: 257, 683 and 941. Even if two of its answers were more or less valid, that is not enough; the AI’s sums are far higher than the one Martin Gardner gave. Perhaps it is simply not yet built to produce an optimal answer. In fact, ChatGPT may be trying to show what a solution might look like rather than actually solving the problem. Or perhaps it did not understand the problem properly. Who knows!
But one good thing about ChatGPT is that it can recognize its own limits. When it was repeatedly told that its answers were incorrect, ChatGPT finally admitted as much. In Bengali, the gist of its reply was: ‘It appears that, at the moment, I am not able to give the correct answers to these riddles. I apologize for that. I think it would be better for you to get help from someone else. Then perhaps you will find the right answer.’
Author: Science journalist and author, Scientific American
Translator: Editorial Team Member, Scientific Thought
Source: Scientific American