This guide is designed to get you started with computational and theoretical neuroscience in the Marder Lab.
- You should use git to organize your code. There are lots of helpful tutorials on the internet if you don't know how to use git. You can also ask Srinivas or Alec.
- Sign up for a GitLab account if you haven't already, and ask Srinivas to add your account to the Marder Lab GitLab organization.
- In general, your code for one project/purpose should live in a Git repository on your computer, synchronized with a repository on the Marder Lab GitLab.
- When you feel your code is ready to be shared with the world (for example, on publication of your paper), you should copy your code over to the Marder Lab GitHub. We use GitHub there for historical reasons; we use GitLab for ongoing code development because it gives us private repositories.
- While our lab is all about open code and reproducible results, don't push work in progress to a publicly accessible git repository. Use a private repository on the Marder Lab GitLab instead. That's what it's for, and it serves as a central place for code exchange among current lab members.
- Write code for humans, not machines. Code is meant to be read by humans and understood by humans, and running on silicon is only incidental.
- On the other hand, remember that code is also meant to be run on machines. Make sure your code doesn't run 100x slower than what is possible. Choose the best language for your task.
The lab maintains a formatted USB flash drive with Manjaro 17 on it. Octopi is the package manager utility for Manjaro. It is important to make sure that Octopi has installed the most recent versions of packages before attempting any complex job. This can be done by syncing and then installing in either the GUI or through the console.
The `-S` flag performs a sync operation, and the `-yu` flags refresh the package databases and upgrade all out-of-date packages, so `pacman -Syu` brings the whole system up to date. You can check whether a package is already installed with `pacman -Qs`, and search the pacman database with `pacman -Ss`.
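On the console, the sequence might look like this (the package name is illustrative):

```shell
# Refresh the package databases and upgrade everything that is out of date
sudo pacman -Syu

# Check whether a package (e.g. atom) is already installed
pacman -Qs atom

# Search the repositories for a package by name
pacman -Ss atom
```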
Brandeis licenses a great deal of useful software to its students and researchers. The complete list can be found through Library and Technology Services. In addition, Brandeis provides hardware and protocols to access a high-performance computing cluster.
If you ask him nicely, Steve Karel will provide you with a Windows 7/10 install disk.
MATLAB & Mathematica
MATLAB and Mathematica are both available from LTS. There is a student version for undergraduates and graduate students and a "staff/faculty" version for everyone else. The difference lies not in the software itself, but in the license Brandeis has purchased to provide the software to you.
Xolotl is a fast single-compartment and multi-compartment simulator written in C++ with MATLAB wrappers. It simulates conductance-based models.
Psychopomp is a MATLAB toolbox to run xolotl simulations in parallel.
Chaos and Dusty can run Python, C++, and Julia without any overhead. Julia can be run online through juliabox.com or locally through Jupyter via the IJulia kernel.
NB: It is perhaps best run on the host computer through the Atom IDE using the Juno environment.
Julia can be downloaded as an executable for Windows, a .dmg file for macOS, or, preferably, a generic Linux binary (the "64-bit (x86)" version). On Linux, the archive must be extracted and moved into the home folder.
If you are installing onto a Windows machine, download the .exe appropriate to your architecture (x86 or x64) and open it. Depending on the zealousness of your antivirus software, you may need to jump through a few hoops to get the program to open. Julia will automatically extract and prompt an installation into
C:\Users\Ahoyland\AppData\Local\Julia-X.Y.Z or something similar. If you want to run Julia from the command-line, you will need to access
C:\Users\Ahoyland\AppData\Local\Julia-X.Y.Z\bin\julia.exe. If you are using Linux, you can download a package (.deb or similar) that will autoextract for you. A better way would be through the command line. Open a Terminal and type:
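On Manjaro, the console installation might look like this (assuming the package names match the Manjaro repositories):

```shell
sudo pacman -S atom julia cmake
```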
Atom is the name of the IDE, the Julia package contains the actual binaries, and cmake is required to build Atom.
NB: You can install Julia and the 100+ most commonly used packages for science and engineering at https://juliacomputing.com/products/juliapro
Open Atom and use Ctrl + Shift + P and go to the "Install" panel. You will need only the "uber-juno" package. Atom will do the rest.
NB: On Windows systems, Atom sometimes has trouble finding the Julia executable. In the settings of the julia-client package, which you installed through uber-juno, put the complete filepath to the Julia executable in the "Julia Path" box (e.g. C:\Users\Ahoyland\AppData\Local\Julia-X.Y.Z\bin\julia.exe).
Once Atom has installed the requisite packages, save the following script wherever you want to run code from, with the file extension ".jl", and run it in Atom.
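A minimal test script along these lines (a sketch, assuming the Plots.jl package discussed below is installed) would be:

```julia
# test.jl -- a minimal script to verify that Julia and plotting work in Atom
using Plots

x = 0:0.01:2pi
y = sin.(x)

plot(x, y, label = "sin(x)", xlabel = "x")
```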
It goes without saying that Python must be installed in order to use PyPlot; most Linux distributions include it. Setting the PYTHON environment variable allows Juno to access the Python distribution and wrap it, through PyCall, into your Julia commands. The Juno documentation contains some useful tricks (e.g. the progress() function) that make coding easier, and the Plots.jl package provides the best data-visualization front-end available. You can update Atom through the package manager (e.g. pacman/Octopi), and packages installed through Atom will prompt updates within the application. Julia is developing rapidly.
NB: You can always find the latest release and bleeding edge versions of Julia on the website.
Since Julia folders are tagged with a checksum, it might be necessary to manually rename the folders or go into the settings of the julia-client package within Atom and change the binary directory to reflect the new version of Julia.
In Juno, pressing the right-facing triangle "play" button will run an entire file. A single line can be run with Ctrl + Enter; Shift + Enter also runs a single line and moves the cursor to the next. If a line is part of a code block, these commands will run the entire block. Ctrl + Shift + Enter runs an entire file. Running a line or block will output plots to the Plot Pane. Non-integrated visualization packages such as vanilla PyPlot will output to new windows via the gui() function. The Plots package supports multitudinous backends, including PyPlot, GR, Gadfly, Plotly, PlotlyJS, and PGFPlots. PGFPlots is the most heavyweight and is LaTeX-based. GR is the fastest. PlotlyJS offers the best functionality for the cost: it is simple to use.
Here are some examples of numerical simulations performed in Julia by Guillaume Drion.
Pyloric Type System
Here is an example of a system designed to numerically integrate the dynamics of the pyloric rhythm. The code is based on Prinz et al., 2004.
You can customize your plot using plot attributes. Plots.jl is designed to be language-agnostic, so that anyone familiar with the basics of programming does not need to fuss over the syntax of a language. For example, the following are equivalent.
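The equivalent calls mentioned above might look like the following sketch (assuming Plots.jl is installed; `c`, `color`, and `colour` are documented aliases for the same attribute):

```julia
using Plots

x = 0:0.1:10

# These three calls produce identical plots -- the attribute
# name is resolved through Plots.jl's alias system.
plot(x, sin.(x), c = :red)
plot(x, sin.(x), color = :red)
plot(x, sin.(x), colour = :red)

# On/off attributes such as legend and grid accept booleans:
plot(x, sin.(x), legend = false, grid = false)
```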
This is one of the uses of Julia's multiple dispatch and dynamics types. For example, attributes which are on/off such as legend and grid accept booleans, but also can be modified through this means.
The Jupyter notebook is an open-source web application which allows for the creation of documents with live code, visualizations, and markup. In the Marder Lab, the Jupyter notebook environment is particularly useful for running the code written by Alex Sutton and Ted Brookings. It was originally developed as IPython, for the dynamically typed language Python. The newest iteration of this endeavor is "Jupyter", a name derived from Julia, Python, and R.
Jupyter can run over 40 different programming languages; however, regardless of what you choose to use it for, Python is a requirement for installation. Project Jupyter emerged out of IPython, and so the source code is still built upon the Python framework. Any installation of Python 2.7, or 3.3 or greater, will work. The best way to do a clean install of Python for Jupyter is through the Anaconda installer. Anaconda, as the name suggests, is a bigger, badder distribution of Python, and includes Jupyter in the installation. To run the notebook, open the terminal or command line and type:
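The launch command is simply:

```shell
jupyter notebook
```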
This will print some information about the server to the terminal/command prompt, none of which is particularly important, save for the local host where the web application is running. Some useful command-line arguments include:
- `--port <number>` serves the notebook on a specific port.
- `--no-browser` stops Jupyter from opening your default web browser.
- `--help` prints general help.
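These options can be combined on one line, for example:

```shell
# Serve on a specific port, without opening a browser window
jupyter notebook --port 9999 --no-browser

# Show all available options
jupyter notebook --help
```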
Securing a Server
If you are accessing your notebook in an insecure location, you should consider modifying the NotebookApp.password setting within the config file, jupyter_notebook_config.py. In the off-chance that you do not have the proper config file, you can create one with `jupyter notebook --generate-config`.
It is also possible, as of version 5.0 of Jupyter, to use a hashed password, and to use secure sockets layer (SSL) certificates for encrypted communication. If you are going to use one of these, you should use both, so that your password is not sent unencrypted through your browser.
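A typical sequence for securing a notebook server looks like this (the certificate and key filenames are placeholders):

```shell
# Create a config file if one does not exist yet
jupyter notebook --generate-config

# Set a hashed password (Jupyter 5.0 and later)
jupyter notebook password

# Launch with SSL, using your own certificate and key
jupyter notebook --certfile=mycert.pem --keyfile=mykey.key
```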
A notebook file is saved like any other on your hard drive. You can create a new one from the dashboard or by going to File > New from within an open notebook. Functionally, this is a lot like any other web-based document processor, such as Word Online or Google Drive. In order to run live code, you'll need an active kernel. For use with Guillaume Drion's or Alec Hoyland's code, you will need a Julia kernel. To use Alex Sutton's programs, you will need a Python kernel, which should already be there. Notebooks with an active kernel have a Shutdown option, whereas notebooks without one have a Delete button instead. Starting a kernel prints the kernel's ID to the terminal. You can show it again by using the %connect_info magic. This is useful if you are trying to connect other consoles to the same kernel, but is generally not something you need to keep track of. More notably, the kernel doesn't restart when the session is closed; all initialized variables remain so until the kernel is forcibly restarted.
Jupyter notebooks consist of cells which allow you to edit and write new code. Via a drop-down menu, you can make these into Markdown cells, which use the Markdown syntax with full syntax highlighting, or into code cells, which run off of the kernel. Python, Julia, and R are particularly well-supported.
Markdown cells give you a lot of freedom in formatting rich-text documentation. For example, you can use 1 to 6 hash (#) signs to indicate headings, *italics* and **bold** for emphasis, and $...$ for inline mathematics or $$...$$ for displayed mathematics (in LaTeX). If you want raw output, you can use nbconvert syntax to write LaTeX, for example, and not have it converted until processed by nbconvert.
There are a lot of really great shortcuts for Jupyter that are helpful to know. Some of my favorites include:
- Esc-a adds a cell above; Esc-b adds a cell below. You can substitute Ctrl-m for Esc.
- Ctrl-m i interrupts the kernel; Ctrl-m . restarts it.
- Shift-Enter runs a cell and jumps to the next one.
- Ctrl-Enter runs the cell in-place.
- Alt-Enter runs the cell and adds a new cell below (i.e. Shift-Enter, Ctrl-m a).
There's a good list of tips and tricks here. Adam Pritchard has compiled a Markdown cheatsheet here.
You can sign a notebook, marking it as trusted, with the jupyter trust command.
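For example (the notebook filename is a placeholder):

```shell
jupyter trust my_notebook.ipynb
```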
Basic Morphological Analysis
If you have an .nml file from Knossos or similar, you can convert it.
This tutorial assumes you have a file traced in Knossos (or similar). The file has a list of nodes (x,y,z,radius) that are identified by a node id (int) and a list of edges connecting the nodes. The code can be easily adapted for other generic file types. Knossos is fast, free, lightweight and easy to use. It's available at www.knossostool.org.
Depending on how large your skeleton file is, this process could take a few minutes. The goal is a .hoc file, which consists of nodes that make up segments and a connectivity matrix of those segments. A hoc file is handy because the simulator NEURON (www.neuron.yale.edu) reads them natively and Ted and I developed toolboxes that work easily with hocs. The flow of the pipeline is as follows:
- Convert xml (nml) file to a hoc
- Remove accidental loops
- Scale the coordinates
- Load the file as a geometry object
- Look at example morphological features
The tutorial.ipynb file written by Alex Sutton and Ted Brookings is a good place to start for morphological analysis. It is not by any means inclusive of all the possible analyses within the quantitative morphology modules written by Alex and Ted. As an aside, the manuscript Sutton et al., 2016 does not exist within these files, nor on the hard disk of Buyer's Remorse.
The first step is to obtain the relevant code. You can either clone the repositories or download and decompress the .zip files. Both methods are accessible from Alex Sutton's GitHub account (acsutt0n). Then import the packages.
You will need the .xml file within your path. If it is not there already, run os.chdir("my/full/path/to/my/directory") first. You can check which directory you are in with os.getcwd().
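A sketch of the directory check (the path here is a placeholder):

```python
import os

# Move into the directory that contains the .xml file
os.chdir("/tmp")  # replace "/tmp" with your own directory

# Confirm the current working directory
print(os.getcwd())
```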
Then you will need to convert to a .hoc file.
The coordinates need to be scaled to the voxel size. You will need to input the x,y,z coordinates of the voxel size.
Once you have the geometric object, you can perform almost any analysis you would like. For example, you could look at the shape of the neuropil.
NB: use the function neuropil_fit() instead of fitting the neuropil with a quadratic or an ellipse.
For the Sake of Efficiency
If used the right way, Julia is about half as fast as C/C++ and ten times easier to use. If used the wrong way, it is only marginally faster than Java or Python. Dahua Lin has done some fine work to keep Julia user-friendly without sacrificing its potential for speed. Here is a paper from SIAM which discusses good use of Julia.
> Many popular dynamic languages were not designed with the goal of high performance in mind. After all, if you wanted really good performance you would use a static language, or so said the popular wisdom. Only with the increasing need in the day-to-day life of scientific programmers for simultaneous productivity and performance has the need for high performance dynamic languages become pressing. Unfortunately, retrofitting an existing slow dynamic language for high performance is almost impossible, specifically in numerical computing ecosystems. This is because numerical computing requires performance-critical numerical libraries, which invariably depend on the details of the internal implementation of the high-level language, thereby locking in those internal implementation details. For example, you can run Python code much faster than the standard CPython implementation using the PyPy just-in-time (JIT) compiler, but PyPy is currently incompatible with NumPy and the rest of SciPy.
In short, using MATLAB for data visualization with your core code written in C++ is probably the best approach (about 10x faster than Guillaume's code), but Julia or SciPy are better if you don't already know C++ or aren't going to cannibalize some other Marder Lab C++ scripts.
Tim O'Leary writes excellent C++ scripts which compile inside MATLAB using mex. Alex Sutton wrote the morphological analysis suite in Python via Jupyter. You can get Python through Anaconda, as noted above. You can use the gpp-compiler package for Atom to run C++ there. Everything else should already be set up, if you installed Atom through a traditional method.