NeuroML for Docker (Java issue) #232
NeuroML requires Java (since JNeuroML is a Java package). Depending on what your Docker container is based on, you will need to include a JVM in it. Here's an example: https://github.com/NeuralEnsemble/neuralensemble-docker/blob/13434295590ef0eb75ef47ac9421847d5ffd2bf3/osb/Dockerfile#L33
Hello Ankur - thank you! I was able to get it to work with the following interactive sequence:
(beginning with the tensorflow/tensorflow:latest-jupyter image)
docker pull tensorflow/tensorflow:latest-jupyter
docker run -it -p 8888:8888 tensorflow/tensorflow:latest-jupyter bash
# this puts me in a bash shell but does not yet start jupyter
# now inside the shell do these before starting python
pip install pyneuroml neuromllite NEURON
pip install install-jdk
pip install scyjava
# now start python temporarily
# the following installs the jdk and jvm into /root/.jdk/jdk-17.0.13+11
python
import jdk
jdk.install('17') # or whatever version works best, paths below are for 17
exit()
# back in bash shell - must add the jdk bin to make the jvm accessible
export PATH=$PATH:/root/.jdk/jdk-17.0.13+11/bin
# must set JAVA_HOME to find the shared library
export JAVA_HOME=/root/.jdk/jdk-17.0.13+11
# verify java has been successfully installed
java -version
# now resume python
python
from scyjava import jimport
# verify java is accessible from inside Python
System = jimport('java.lang.System')
print(System.getProperty('java.version'))  # prints the installed Java version
# continue with neuroml instructions
from neuroml import NeuroMLDocument  # ... etc
The simulation works and creates the .png output file; starting Jupyter and running the whole thing again produces a nice plot.
These instructions (or whatever way works best) would be helpful to post on the NeuroML web site. A page dedicated to simple Docker instructions would attract some customers.
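For reference, the interactive sequence above could be collapsed into a Dockerfile along these lines. This is a sketch only, not something tested in this thread: the JDK version string and the /root/.jdk path follow the install-jdk defaults shown above and may differ between JDK releases.

```dockerfile
# Sketch: consolidates the interactive steps from this thread into one image build.
FROM tensorflow/tensorflow:latest-jupyter

RUN pip install pyneuroml neuromllite NEURON install-jdk scyjava

# install-jdk places the JDK under /root/.jdk/<version> by default
RUN python -c "import jdk; jdk.install('17')"

# Make the JVM discoverable; the exact directory name depends on the JDK build
ENV JAVA_HOME=/root/.jdk/jdk-17.0.13+11
ENV PATH="${PATH}:${JAVA_HOME}/bin"
```

With this, `java -version` and the LEMS simulation should work directly in the running container, without the interactive setup steps.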
There is still a big gap between NeuroML and TensorFlow, but the gap is closing rapidly. Ca++-based dendritic spiking is interesting to the ML crowd; it can't really be done in native TensorFlow without horrible mangling.
For my purposes I'd like TTFS (time-to-first-spike coding) and two types of STDP (spike-timing-dependent plasticity), with separate computations for separate purposes.
NeuroML could gain customers by getting ahead of this; otherwise TensorFlow will come up with its own half-baked implementation, and that will hurt actual scientific research ("in my opinion").
Thank you for your help and your quick reply! Nice to meet you. Here is a quick description of a model you may find interesting:
<==== evoked potentials <===== (T=0) <===== premotor potentials <=====
1. Generate a timeline from evoked potentials and premotor potentials, centered at the point "now" (call it T=0, so the brain becomes a window moving through time). In ML terminology this might be a stack from right to left, with sensory information entering at T=0.
2. Map the reference frame from physical time t to "egocentric" time T, which requires a change in direction, to get "future" and "past" relative to T=0. Note there is a singularity at T=0 because of conduction delays - but the purpose of the brain is to optimize behavior "in real time", therefore it has to cover the singularity. (T is a "representation" of information moving through the network.)
3. Compactify the timeline by joining the ends ("Alexandrov 1-point compactification"). This creates the equivalent of a reflex loop, but at multiple levels of resolution, somewhat similar to changing the window size in time series analysis.
4. By taking the limit as dT → 0 and overlaying the projective maps, the result is a "Hawaiian earring" construction (see below), where the point at infinity overlays T=0. Taking a vertical slice through the topology overlays all points at infinity.
5. Done this way, the singularity at T=0 "unfolds" into the projective space generated by the Hawaiian earring.
6. One way to access this topology (for example) is with an embedding layer over a convolutional stack. It works because of the random sampling times of the neurons: little bits of each diameter get updated every time a neuron fires.
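For readers unfamiliar with the construction in step 4, the standard topological definition of the Hawaiian earring (a general fact of topology, not specific to this model) is:

```latex
% Hawaiian earring: the union of circles of radius 1/n, all tangent at the origin
H \;=\; \bigcup_{n=1}^{\infty} C_n,
\qquad
C_n \;=\; \left\{ (x,y) \in \mathbb{R}^2 \;:\; \left(x - \tfrac{1}{n}\right)^{2} + y^{2} = \tfrac{1}{n^{2}} \right\}
```

Every circle passes through the origin, which in the model above is the point where the point at infinity overlays T=0.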
This is a developmental model: the convolutional stack has to be trained first before the embedding layer can usefully become a global associative memory. However, the various diameters end up interacting with each other, generating a topological "covering" of the neighborhood of T=0 "in the limit". The result is a continuous egocentric reference frame.
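Step 6's claim that random firing times incrementally cover each diameter can be illustrated with a toy sketch. This is my own construction for illustration, not code from the model: each "diameter" is a ring of phase bins at a different temporal resolution, and each time a ring's neuron fires at a random moment, the bin at the current phase is updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: three nested "diameters" at different temporal resolutions,
# each represented as a ring of phase bins.
sizes = [4, 8, 16]
rings = [np.zeros(s) for s in sizes]

for t in range(2000):                    # simulated time steps
    for ring in rings:
        if rng.random() < 0.3:           # this ring's neuron fires at a random time
            ring[t % len(ring)] += 1     # update the bin at the current phase

# Over time every bin of every ring receives updates: piecemeal coverage
coverage = [bool(np.all(r > 0)) for r in rings]
print(coverage)
```

The point of the sketch is that no ring is ever updated all at once; coverage of each circumference accumulates from sparse, randomly timed events.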
This model dovetails perfectly with ongoing research, and it makes specific predictions about what we can expect to find in the "backward flows", for example from hippocampus to frontal cortex. As drawn above, the hippocampus is on the far left of the timeline and the prefrontal cortex is on the right.
Note that this construction gives us forward and backward memory replay for free; it happens naturally as a result of the projective mapping. You can see this by tracing around any of the diameters and then projecting back onto the linear timeline from the posterior half of the circle.
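The replay claim can be checked numerically. This is a minimal sketch of my own, under the assumption that the timeline is joined into a unit circle and "projection" means dropping each point back to the x-axis:

```python
import numpy as np

# Join the timeline into a circle: theta sweeps once around the compactified loop.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)

# Project each point back onto the linear timeline (the x-axis).
x = np.cos(theta)

# Anterior half (theta in [0, pi]): the projection sweeps forward along the line.
# Posterior half (theta in [pi, 2*pi]): the same trace replays in reverse.
forward = x[:5]    # monotonically decreasing sweep
backward = x[4:]   # monotonically increasing sweep (time-reversed replay)

print(np.all(np.diff(forward) < 0), np.all(np.diff(backward) > 0))
```

Tracing once around the circle thus visits the projected timeline twice: once in order, once reversed, which is the "replay for free" observation.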
The added benefit of this geometry is that it handles conformal mapping for free: complex coordinates become accessible in the neural spike trains. (We can easily prove this with a bit of dynamic analysis.)
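One concrete way to read "complex coordinates in the spike trains" is phase coding; this is my own illustration, not the author's derivation, with example spike times chosen arbitrarily. Each spike time maps onto the compactified cycle via the complex exponential, which is conformal away from the origin:

```python
import numpy as np

period = 1.0                                   # one trip around the compactified timeline
spikes = np.array([0.05, 0.20, 0.45, 0.80])    # example spike times (assumed values)

# Each spike time becomes a point on the unit circle in the complex plane,
# so every spike carries a complex (phase) coordinate.
z = np.exp(2j * np.pi * spikes / period)

# All points lie on the unit circle: magnitudes are 1, information is in the angle.
print(np.allclose(np.abs(z), 1.0))
```

Under this reading, the complex structure comes from the circular timeline itself; relative spike timing becomes a phase angle.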
Brian

[Figure: The Hawaiian earring (linked scientific diagram)]
Is your feature request related to a problem? Please describe.
The introductory Izhikevich example works fine in a Docker container until the LEMS simulation, then it looks for a JVM and can't find one (I'm running inside a Jupyter notebook with a Python console on Windows 10 with WSL2).
Describe the solution you'd like
It would be helpful to have a Dockerfile or instructions that will make this work.
Describe alternatives you've considered
I tried the install-jdk tool in Python. It works insofar as you can cd into the bin directory and execute 'java -version', but it complains about libjvm.so, which exists in the lib/server subdirectory of the JDK install but can't be found even with the path set.
Additional context
It would be yummy to make this work in the same container that TensorFlow uses. (hint hint)
I'm happy to collaborate with anyone who'd like to help make this work. - Brian [email protected]