
SILVIA: Artificial Intelligence Platform

There are 10 comments on this program

Previous FORAtv comments:
Student4Life1975
Posted: 05.27.11, 07:28 AM
I think AI is a legitimate topic that really needs to be explored. If it is to be created, hopefully it's on a level where it cannot affect anything outside of its immediate realm, meaning not in a position to do harm if it "decides" to. An example would be putting Hitler in a bank vault: outside the vault he could potentially do quite a bit of harm to others, but kept inside, despite his abilities and intelligence, he would be quite harmless to everyone outside the vault, regardless of his intentions. Anyway, perhaps AI would be no threat at all, simply because it may lack the one component that we humans seem to possess, which is human EGO. Ego is largely, if not solely, responsible for every "evil" thought we have towards each other, and if AI were lacking that component, what would the results be?
Periergeia
Posted: 06.24.10, 09:13 PM
Watching the "demo" on the Cognitive Code website, I had to laugh... SILVIA either lies all the time or it is horribly "stupid". For instance, asked about the "things it likes", it pretends to "like cars" because "they sound like a lot of fun". The correct answer, of course, would have been "I am not a sentient being, yet; therefore, I cannot 'like' things." I think I have found multiple examples of this "behavior", which, of course, are not SILVIA's fault. She is not a sentient being; therefore, she does not really "lie". Her makers, of course, do not pretend that she is a sentient being. They do pretend, though, that she can give useful answers, which, in this case, she does not.

Truthfully, I believe the system merely shows that it is nothing more and nothing less than a chatterbot with an extensive database of prerecorded answers. It does not seem to reflect on the information presented to it, or, if it does in some limited way, it readily returns the wrong answer, which, in human terms, would make it "stupid". From what I have seen, it does not even understand the information presented to it. It does a much better job than ELIZA and most of her successors at deflecting from the fact that there is absolutely no intelligence in the system whatsoever, but that is all it does.

What concerns me, though, is that the programmers of SILVIA are not willing to admit her limitations or her true nature, and have therefore programmed answers into the system that would not be given by an expert system that correctly analyzes the question as a reference to itself and then deduces that the mixture of human traits like "likes" and "after-work pleasures" and machine traits like the size of the knowledge database does not lead to an answer that can pretend to be human. Sadly, honesty is known to be one of the easiest of AI problems. Every automatic theorem prover is, by design, completely honest: it either says that the answer is "true" and can deliver the formal proof on request, says that the answer is "false", or, if the algorithm can do neither, returns a sound "I don't know". All of these are perfectly truthful answers to a perfectly solvable AI problem... one that SILVIA obviously does not even attempt to solve.

Now, one has to ask oneself: why would one want to interact with a system of pretense to begin with? A typical application, e.g. in call centers, involves a highly complex task: solving problems that the database used by the call center personnel is often not made for. I had several such encounters lately, and the differences in call quality were all based on the call center person's inventiveness in working around the limitations of their system. Obviously SILVIA, having no operational intelligence, cannot do that. It would have to fold and connect to a human operator as soon as it was asked something it wasn't programmed for. That, of course, can be done just as easily with a fixed menu system. Moreover, a fixed menu system with a pre-programmed maximal depth ensures that the customer's patience is never taxed beyond well-established psychological limits. The latter is much harder to achieve with a system that has tons of smart-ass answers pre-programmed, which it will unload on the naive user who tries to make it do something it simply can't. The problem I see is that SILVIA cannot even detect when it cannot give a truthful answer... instead, it keeps pretending that it might be able to help, all at the expense of the user.
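[Editor's note: the three-valued honesty described above can be sketched in a few lines. This is an illustrative toy, not SILVIA's code or any real prover; the names Verdict and honest_answer are made up for the example.]

```python
from enum import Enum

class Verdict(Enum):
    """The three honest answers an automated theorem prover can give."""
    TRUE = "true"        # provable; a formal proof can be produced on request
    FALSE = "false"      # refutable; a disproof can be produced
    UNKNOWN = "unknown"  # the algorithm can do neither

def honest_answer(query, prove, refute):
    """Return a truthful verdict instead of pretending to know."""
    if prove(query):
        return Verdict.TRUE
    if refute(query):
        return Verdict.FALSE
    return Verdict.UNKNOWN

# A toy "prover" that only recognizes one arithmetic identity:
print(honest_answer("1+1=2", lambda q: q == "1+1=2", lambda q: False))  # Verdict.TRUE
print(honest_answer("P=NP",  lambda q: False,        lambda q: False))  # Verdict.UNKNOWN
```

The point is the third branch: a system that can say "I don't know" never has to fake an answer.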
devan
Posted: 08.04.09, 09:02 PM
Can someone help me get in touch with Cognitive Code Corporation? I sent an email to info@cognitivecode.com, but it was returned undelivered. Kindly advise me if there is another way to contact them about development work.
Devan Nair, KIDZTV
mchen
Posted: 04.11.09, 10:58 PM
Hello,
Thanks for the interest. Cognitive Code has been busy working with other companies to integrate SILVIA into their products and as such, the website takes a low priority. In being mindful of the current economy, we are keeping operations very lean. However, we'll try to put in an update soon.
Mimi Chen, Cognitive Code Corporation
giannis
Posted: 01.06.09, 04:18 AM
I recently found out about the project, and I wonder what its current stage is. Is it dead? The website http://www.cognitivecode.com/ appears dead, and I can't see any news about it anywhere!
CO4E
Posted: 03.27.08, 03:06 PM
Quote: Originally Posted by Cognitive
In short, that sort of self-motivated, pattern recognizing autonomy is something intrinsic to the system, and we are finding that this is of more interest to certain strategic partners than even the application framework or the free-form conversational aspects of SILVIA. But since the topic of the talk was about "conversational intelligence" ...

I'm working on a model of complexity itself. I suspect that there are two ways of viewing mathematics: 1) as a relationship governed by the "=" sign; 2) as an event elucidated by the "=" sign. In the first case, irrational numbers can be applied to a given model of SpaceTime. In the second case, because they can never resolve, there is no capacity to implement them.

I'm interested in how you define "intrinsic". In my model I have had to make a decision on concepts. I have decided that they have little to do with complexity. So in the language of the model's disclosure or elucidation, I have had to define words quite simply. Cause, effect, form, function, innate, intrinsic, abstract, metabolism, experience, diversity, purity, expression, acquisition, hierarchy, and several others: these are defined, as opposed to described or simply used. Words that I have had to simply cast out are words like intellect, instinct, evolution, life, time (as per E=MC2), intelligence, system, natural, artificial, (ego, id, and the like), mind, thought, gravity, matter, truth, and so on.

Anyway, I can't really get into the details, unfortunately, or I may just give up the idea of publishing a finished work and simply put it on the net as I get it worked out. My partner and I are ambivalent about where to go with it after these last four years of brain-numbing classifications and dialectic cleansing. Not to mention just being a little unsettled about the results. Maybe this is a cry for help?
Cognitive
Posted: 03.10.08, 11:01 PM
Quote: Originally Posted by crawshanty
Kudos, Mr Spring! I have been waiting for the "churn" concept you talk about here. (The next one I'm waiting for is when pitch and tempo are taken into account by the speech recognition.) From a robotics perspective, can SILVIA process other types of input besides speech, for example video or other sensor input? Can SILVIA output control signals of any kind? I'm sure I'm not alone in eagerly anticipating your products!

Thanks so much. The "churn" aspect of the cognitive feedback component is fairly integral to SILVIA's ability to react contextually, so we're pretty excited about that too.

Re. your questions: SILVIA can take input from just about any data source, in just about any format, so speech and text are only a small subset of what she can accept and interpret. On the output side, SILVIA can execute almost any sort of transaction. More specific to your question about output, I happen to have a set of SILVIA brains that wirelessly controls a robotic toy, giving SILVIA a bipedal "body" to play with. I also have her controlling various consumer electronics devices via IR. So with the platform's plugin architecture, modules and SILVIA brain files can easily be created to execute transactional interactions with almost any sort of system or device. Thanks again for watching!
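[Editor's note: the plugin architecture described here can be sketched as an intent-dispatch loop. Every name below (Plugin, IRBlasterPlugin, dispatch) is hypothetical and for illustration only; it is not Cognitive Code's actual API.]

```python
from abc import ABC, abstractmethod

class Plugin(ABC):
    """A module the conversational core can route interpreted intents to."""
    @abstractmethod
    def can_handle(self, intent: str) -> bool: ...
    @abstractmethod
    def handle(self, intent: str, args: dict) -> str: ...

class IRBlasterPlugin(Plugin):
    """Controls consumer electronics over infrared, as in the IR example above."""
    def can_handle(self, intent):
        return intent in ("tv_power", "volume_up", "volume_down")
    def handle(self, intent, args):
        # A real module would transmit an IR code here; we just report it.
        return f"sent IR code for {intent}"

def dispatch(plugins, intent, args):
    """Route an intent to the first plugin that accepts it."""
    for p in plugins:
        if p.can_handle(intent):
            return p.handle(intent, args)
    return "no handler available"

print(dispatch([IRBlasterPlugin()], "tv_power", {}))  # sent IR code for tv_power
```

The design benefit of this shape is that the conversational core never needs to know what a plugin does, only whether it claims an intent, so new devices can be added without touching the core.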
crawshanty
Posted: 03.07.08, 03:13 PM
Kudos, Mr Spring! I have been waiting for the "churn" concept you talk about here. (The next one I'm waiting for is when pitch and tempo are taken into account by the speech recognition.) From a robotics perspective, can SILVIA process other types of input besides speech, for example video or other sensor input? Can SILVIA output control signals of any kind? I'm sure I'm not alone in eagerly anticipating your products!
Cognitive
Posted: 03.01.08, 08:59 PM
SILVIA as an autonomous agent
Quote: Originally Posted by CO4E
The term AI is overused. The only intelligence here is that of the developer. This is just a fancy state machine. How come SILVIA has to be told that the lights are too low? See, a personal AI would have only one concern: anticipate the needs of its organic partner. AI has few needs after implementation. It needs a source of uninterrupted power and it needs increasing capacity to perform as memory elongates.

Thanks for watching, and you have some interesting points. During the talk, there were quite a few aspects of SILVIA that I only touched upon lightly. One of those aspects is the way that SILVIA, using a persistent cognitive feedback loop working in conjunction with the context sensitivity algorithms, can predictively and proactively do things for the user without having to be told. The SILVIA core also has a good selection of methods, statistical and otherwise, for SILVIA to draw from in predicting what the user might want to do at any given time and under different contextual situations.

In short, that sort of self-motivated, pattern-recognizing autonomy is something intrinsic to the system, and we are finding that this is of more interest to certain strategic partners than even the application framework or the free-form conversational aspects of SILVIA. But since the topic of the talk was about "conversational intelligence" ...
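[Editor's note: one of the simplest statistical methods of the kind mentioned here is a frequency table over (context, action) pairs. This is a minimal hypothetical sketch of that idea, not the SILVIA core; the ContextPredictor name and the example contexts are invented.]

```python
from collections import Counter, defaultdict

class ContextPredictor:
    """Predict the user's likely next action from observed (context, action) pairs."""
    def __init__(self):
        self.history = defaultdict(Counter)  # context -> action frequency counts

    def observe(self, context: str, action: str):
        """Record that the user took `action` in `context`."""
        self.history[context][action] += 1

    def predict(self, context: str):
        """Return the most frequent action seen in `context`, or None."""
        counts = self.history[context]
        return counts.most_common(1)[0][0] if counts else None

p = ContextPredictor()
p.observe("evening", "dim_lights")
p.observe("evening", "dim_lights")
p.observe("evening", "play_music")
print(p.predict("evening"))  # dim_lights
```

With a loop like this feeding the predictor, an agent could offer to dim the lights in the evening without being told, which is the proactive behavior the reply describes.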
CO4E
Posted: 03.01.08, 12:30 PM
The term AI is overused. The only intelligence here is that of the developer. This is just a fancy state machine. How come SILVIA has to be told that the lights are too low? See, a personal AI would have only one concern: anticipate the needs of its organic partner. AI has few needs after implementation. It needs a source of uninterrupted power and it needs increasing capacity to perform as memory elongates.