The Philosophy of Bonding
There is much debate over the famous saying of Heraclitus that no one crosses the same river twice. It may point to the changing waters, the changing times, or even the changing experiences. But once a man commits an error in a situation, he will not commit the same error when a similar situation recurs.
The more a man knows, the more he moves towards the unknown.
The mind is a unique organ that can never be completely understood by its keeper. Every experience is eventually coded by the mind, and the funny part lies in how all these codes are bonded. If the mind is asked to bond a glass tumbler and water, it easily forms a picture of a water-filled glass tumbler. When the mind is asked to bond a handful of jasmine flowers, it simply bonds the handful of jasmine flowers with their fragrance and color.
In the process of bonding, the mind tries to fit a pattern it has already experienced, or to set a new pattern that is logically correct and understandable. For instance, the mind has experienced the wind, and it has also experienced a spider web. But when the mind is asked to bond the spider web and the wind, it faces a challenge. The mind has never experienced a situation where the wind and the spider web were connected, and this creates discomfort in trying to bring the two together.
Both the wind and the spider web exist as data in the mind, yet the connection between them is not easily achieved. The mind may hold a piece of data and still not truly know it. This is a paradoxical situation. When the same task is pushed to Artificial Intelligence, it is executed readily and an output is produced. Here, what the mind could not do is executed by the AI.
The mind always relies on logical connectivity and banks on a good quantum of experiential data. In Artificial Intelligence this need not be present. It gives an output that may not satisfy every logical demand, and it may not have experienced the situation either.
It was a unique experience to ask an AI to generate images. The task depended entirely on the prompt. A situational prompt was presented and tagged as a "classic painting", and it gave a good output. To take the task further, the tag was changed from "classic painting" to "philosophical painting", "musical painting", and "poetic painting"; the output was not the same in any of the three.
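Though the essay does not say which tool or model was used, the experiment amounts to holding the scene fixed and varying only the style tag in the prompt. The sketch below illustrates that procedure in Python, assuming the OpenAI SDK and its image-generation endpoint; the scene text, the "dall-e-3" model name, and the image size are illustrative assumptions, not details from the original account.

    # Minimal sketch: one fixed scene, several style tags, one image per tag.
    # Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # The scene is an illustrative choice, borrowing the essay's wind and spider web.
    scene = "a spider web trembling in the wind at dawn"
    style_tags = ["classic painting", "philosophical painting",
                  "musical painting", "poetic painting"]

    for tag in style_tags:
        response = client.images.generate(
            model="dall-e-3",                       # assumed model
            prompt=f"{scene}, rendered as a {tag}",
            n=1,
            size="1024x1024",
        )
        # The same scene under a different tag yields a noticeably different image.
        print(tag, "->", response.data[0].url)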
Do these tasks expose a limit in the perception-forming capacity of the logical mind? If so, is AI not better oriented to the task than the mind? AI is capable of connecting two unrelated things to give an output that may or may not be logically correct; nevertheless, it completes the task. When the same task is given to the mind, it fumbles to give a logical output. Is the fear of producing an output from two unrelated things rooted in a demand for logical perfection?
The human mind will not cross the same river twice, but AI will not hesitate to cross it.