Thought pathways #11

Open
Glavin001 opened this issue Mar 11, 2015 · 8 comments

@Glavin001 (Member)

Similar to Neural pathways except on a larger, more modular scale.

While neural pathways connect neurons throughout distant areas of the brain, Donna's thought pathways would represent traversals through thought handlers.

A thought handler is a function that transforms a thought entity, provided the entity's data type matches an input type the handler supports and the handler can produce one of its supported output types. While the brain has clusters of neurons that connect to each other and eventually learn how to process the information passing through them (a neural network), a thought handler is like a pre-computed neural network that is specialized for processing a certain type of data and can output another specific type of data. For instance, there are areas of the brain responsible for understanding speech. In Donna, those areas would be represented as individual thought handlers, such as a Speech-to-Text handler that supports the input type audio/speech and the output type text. Donna could have another thought handler for text-to-Intent, such as wit.ai.

A thought entity can be thought of as a unit describing a chunk of related information that can be processed by Donna.
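To make the two concepts concrete, here is a minimal TypeScript sketch. The names (`ThoughtEntity`, `ThoughtHandler`, `dataType`, and the example handler) are hypothetical and not part of Donna's actual code:

```typescript
// Hypothetical shapes for the two core concepts described above.
// All names here are illustrative assumptions, not Donna's actual API.

interface ThoughtEntity {
  dataType: string;      // e.g. "audio", "text", "Intent"
  data: unknown;         // the chunk of related information being processed
  confidence?: number;   // 0..1, set by the handler that produced this thought
}

interface ThoughtHandler {
  name: string;
  inputTypes: string[];  // thought data types this handler can consume
  outputTypes: string[]; // thought data types this handler can produce
  handle(thought: ThoughtEntity): Promise<ThoughtEntity>;
}

// Example: a Speech-to-Text handler that accepts "audio" and produces "text".
const speechToText: ThoughtHandler = {
  name: "speech-to-text",
  inputTypes: ["audio"],
  outputTypes: ["text"],
  async handle(thought) {
    const transcript = "placeholder transcript"; // a real STT service would be called here
    return { dataType: "text", data: transcript, confidence: 0.9 };
  },
};
```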

Sensory input (Input entities) will be received by Donna and then converted to Thought entities that can be passed through thought pathways, much like information in the human body's nervous system passes through the neurons of the brain. Graph traversal algorithms could be implemented to improve the efficiency and precision of the traversal from an input entity through the thought handlers. Neural networks could also be used to further optimize which thought handlers are used more often in different cases.
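As a rough illustration of the traversal idea, a breadth-first search over data types could find one chain of handlers from the sensed type to a type an output plugin accepts. This is only one possible approach and reuses the hypothetical `ThoughtHandler` shape sketched above:

```typescript
// Sketch of a pathway search: each handler is an edge from an input data type
// to an output data type; a breadth-first search finds one chain of handlers
// that turns the sensed data type into a type some output plugin understands.
function findPathway(
  handlers: ThoughtHandler[],
  fromType: string,
  toType: string
): ThoughtHandler[] | null {
  const queue: { type: string; path: ThoughtHandler[] }[] = [
    { type: fromType, path: [] },
  ];
  const visited = new Set<string>([fromType]);

  while (queue.length > 0) {
    const { type, path } = queue.shift()!;
    if (type === toType) return path;

    for (const handler of handlers) {
      if (!handler.inputTypes.includes(type)) continue;
      for (const next of handler.outputTypes) {
        if (visited.has(next)) continue;
        visited.add(next);
        queue.push({ type: next, path: [...path, handler] });
      }
    }
  }
  return null; // no chain of handlers connects the two types
}
```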

Glavin001 self-assigned this Mar 11, 2015
Glavin001 added this to the v0.2.0 milestone Mar 11, 2015
@Glavin001 (Member, Author)

Essentially, the InputEntity (for raw data, right from the senses) and the IntentEntity would both become ThoughtEntity, and the IntentRouter would become the ThoughtRouter.
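In code terms the rename could be as small as the following sketch (same hypothetical names as above; the `ThoughtRouter` interface is an assumption):

```typescript
// Hypothetical sketch of the rename: raw sensory data and parsed intents both
// become ThoughtEntity instances, distinguished only by dataType, and a single
// ThoughtRouter replaces the old IntentRouter.
type InputEntity = ThoughtEntity;   // raw data, straight from the senses
type IntentEntity = ThoughtEntity;  // dataType: "Intent"

interface ThoughtRouter {
  route(thought: ThoughtEntity): Promise<ThoughtEntity>;
}
```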

@Glavin001 (Member, Author)

Thought entities should have a field for Thought Handler History, or something similar, that handlers would be added to as the Thought passes through them.

By including this, we could take a Thought at the end of its life, such as when it is sent to Output, compare the thought's metadata with its thought handler history, and further optimize, such as with artificial neural networks.

There could also be a supervised learning mode for Donna that would allow the user to train this artificially intelligent thought router.
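A possible shape for the history field and the end-of-life feedback, building on the hypothetical types above (`RoutedThought` and `handlerHistory` are made-up names for illustration):

```typescript
// Hypothetical extension: each thought carries the names of the handlers it
// has already passed through, appended by the router as it dispatches.
interface RoutedThought extends ThoughtEntity {
  handlerHistory: string[];
}

async function dispatch(
  thought: RoutedThought,
  handler: ThoughtHandler
): Promise<RoutedThought> {
  const result = await handler.handle(thought);
  return { ...result, handlerHistory: [...thought.handlerHistory, handler.name] };
}

// At the end of a thought's life (when it reaches Output), the history plus a
// success signal could be fed to a learner, e.g. to adjust pathway weights.
function recordOutcome(thought: RoutedThought, wasSuccessful: boolean): void {
  console.log(thought.handlerHistory, wasSuccessful ? "reinforce" : "penalize");
}
```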

@Glavin001 (Member, Author)

Also utilize the confidence factor. Picture an Artificial Neural Network

[image: artificial neural network diagram]

with each input flowing through the connections joining each of the nodes until it reaches the output.

Now consider that each of the hidden nodes represents a thought handler / process / transformer. In an ANN, each node has a coefficient. In the thought pathway, the coefficient reflects the confidence bias for each thought handler (hidden node). Each thought handler should have its own confidence for its result given the input (thought entity), and that confidence is then weighted by the handler's coefficient as learned in the thought pathway network.
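One way the two numbers could combine when the router chooses among candidate handlers is sketched below; the coefficient store and the multiplicative scoring rule are assumptions, not a settled design:

```typescript
// Hypothetical scoring: each handler reports its own confidence for a given
// thought, and the pathway network contributes a learned coefficient (weight)
// per handler; the router picks the highest-scoring candidate.
const learnedCoefficients = new Map<string, number>(); // handler name -> weight

function scoreHandler(
  handler: ThoughtHandler,
  reportedConfidence: number
): number {
  const coefficient = learnedCoefficients.get(handler.name) ?? 1.0;
  return coefficient * reportedConfidence;
}

function pickHandler(
  candidates: { handler: ThoughtHandler; confidence: number }[]
): ThoughtHandler | null {
  let best: { handler: ThoughtHandler; score: number } | null = null;
  for (const { handler, confidence } of candidates) {
    const score = scoreHandler(handler, confidence);
    if (!best || score > best.score) best = { handler, score };
  }
  return best ? best.handler : null;
}
```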

@Glavin001 (Member, Author)

Examples of situations for Donna to process:

  • Sense -> Sense/Input receiver plugin
  • Thought -> Thought handler plugin
  • Output -> Output handler plugin

Playing a YouTube video, requested by voice

"Play <song name here> on YouTube" (Sense) -> Speech-to-text (Thought) -> text-to-intent with Wit.ai (Thought) -> intent processing for intent play_youtube_video (Thought) -> Play video on YouTube (Output)

Detailed breakdown (a code sketch of this pipeline follows the list)

  • "Play <song name here>" (Sense)
    • output thought entity data type: audio
  • Speech-to-text (Thought)
    • supported input thought entity data types: audio
    • supported output thought entity data types: text
  • text-to-intent with Wit.ai (Thought)
    • supported input thought entity data types: text
    • supported output thought entity data types: Intent
  • intent processing for intent play_youtube_video (Thought)
    • supported input thought entity data types: Intent
    • supported output thought entity data types: N/A
    • output entity data type: YouTubeVideo
    • Note that all handlers of Intent would receive this Thought entity; however, only those applicable to play_youtube_video should process it.
  • Play video on YouTube (Output)
    • supported input data types: YouTubeVideo

More coming soon

@Glavin001 (Member, Author)

Have an expiry date and date of last access attached to each of the Thought entities? Older thoughts can be pushed from short-term into long-term memory and read later.
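For illustration, the memory fields and a short-term-to-long-term sweep might look like this (`MemorizedThought` and the field names are hypothetical):

```typescript
// Hypothetical memory fields: an expiry date and a last-accessed timestamp on
// each thought, plus a sweep that moves stale thoughts into long-term storage.
interface MemorizedThought extends ThoughtEntity {
  expiresAt: Date;
  lastAccessedAt: Date;
}

function sweep(
  shortTerm: MemorizedThought[],
  longTerm: MemorizedThought[],
  now: Date = new Date()
): MemorizedThought[] {
  const keep: MemorizedThought[] = [];
  for (const thought of shortTerm) {
    if (thought.expiresAt <= now) longTerm.push(thought); // archive, read later
    else keep.push(thought);
  }
  return keep; // the surviving short-term memory
}
```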

@Glavin001 (Member, Author)

Thought entities should have keywords associated with them such that they can be "primed". Priming is an implicit memory effect in which exposure to one stimulus influences a response to another stimulus. Donna is constantly receiving stimuli, and priming is an important aspect of making sure she reacts appropriately.
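A small sketch of how priming could work mechanically, assuming thoughts carry keywords and recently seen keywords grant a temporary relevance boost (all names and the boost value are illustrative):

```typescript
// Hypothetical priming: recent stimuli register keywords, and thoughts sharing
// those keywords get a temporary relevance boost when routed.
const recentlyPrimed = new Map<string, number>(); // keyword -> expiry (ms epoch)

function prime(keywords: string[], durationMs = 60_000): void {
  const until = Date.now() + durationMs;
  for (const keyword of keywords) recentlyPrimed.set(keyword, until);
}

function primingBoost(keywords: string[]): number {
  const now = Date.now();
  let boost = 0;
  for (const keyword of keywords) {
    const until = recentlyPrimed.get(keyword);
    if (until !== undefined && until > now) boost += 0.1; // small bump per hit
  }
  return boost;
}
```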

@Glavin001 (Member, Author)

Consider integrating with Node-RED! https://github.com/node-red/node-red

@jaysnanavati

This looks interesting (especially the idea of using Node-RED), and I would like to contribute to this. How much work have you done on this, and could you provide any pointers to get started?
