A Simple Neural Network with NumPy
Using NumPy, and with guidance from neuralnetworksanddeeplearning.com and a few other tutorials, I created a simple neural network that learns a two-input, one-output function. The utility of this network doesn't extend beyond an educational demonstration. The training function is:
Using three hidden layers of 40 neurons each, the network is fed two random inputs in (0, 1), and the output of the training function is used to compute the error, which is then back-propagated. The network is trained for 50,000 epochs with a batch size of 10,000, after which it is able to recreate the output of the function. For easy copying and pasting, the code is presented here as a single file.
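The core of such a network can be sketched as below. Since the actual training function is the one shown above, a hypothetical stand-in f(x, y) = x·y is used here; the layer sizes match the description (two inputs, three hidden layers of 40 neurons, one output), while the sigmoid activations, linear output layer, and learning rate are assumptions of this sketch rather than details from the original code.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Stand-in training function (hypothetical): product of the two inputs."""
    return x[:, :1] * x[:, 1:2]

# Layer sizes: 2 inputs, three hidden layers of 40 neurons, 1 output.
sizes = [2, 40, 40, 40, 1]
W = [rng.normal(0, np.sqrt(2.0 / m), (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Return the activations at every layer (all are needed for backprop)."""
    a = [x]
    for i, (Wi, bi) in enumerate(zip(W, b)):
        z = a[-1] @ Wi + bi
        a.append(sigmoid(z) if i < len(W) - 1 else z)  # linear output layer
    return a

def train_step(x, y, lr=0.1):
    """One gradient-descent step on the mean-squared error; returns the loss."""
    a = forward(x)
    delta = 2.0 * (a[-1] - y) / len(x)        # dLoss/dz at the linear output
    for i in reversed(range(len(W))):
        grad_W = a[i].T @ delta                # sums over the batch
        grad_b = delta.sum(axis=0, keepdims=True)
        if i > 0:                              # propagate through sigmoid layer i
            delta = (delta @ W[i].T) * a[i] * (1.0 - a[i])
        W[i] -= lr * grad_W
        b[i] -= lr * grad_b
    return float(np.mean((a[-1] - y) ** 2))

# Each step: a batch of random inputs in (0, 1), error from the training function.
x = rng.uniform(0, 1, (10_000, 2))
losses = [train_step(x, f(x)) for _ in range(200)]
```

Running many more steps (the original uses 50,000 epochs) drives the loss low enough for the network to reproduce the function's output.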
A Simple Neural Network with TensorFlow
I translated the above NumPy neural network into TensorFlow to learn the dataflow programming paradigm for machine learning. The training function, network parameters, and visualization code are identical to the NumPy model; the differences are in the network setup (starting at line 66) and in the network training (starting at line 145). Both networks achieve approximately the same accuracy per epoch, but the TensorFlow model runs about 3x faster on this small-scale sample problem.
qtail.sh – a qsub wrapper with an interactive feel
When debugging HPC jobs under the Portable Batch System (PBS) scheduler, I often logged into a compute node for an interactive session. Unfortunately, the terminal window did not always render correctly, making interactivity difficult, and I did not have access to my .bashrc file since I was placed on a different node each time. To solve this I created a function called qtail, a portmanteau of qsub and tail. Effectively a wrapper for qsub, qtail is run with the same syntax and options as qsub, but it also runs tail on the output files that qsub generates. When qtail is terminated, it kills the corresponding PBS job (by grepping for the job number in the output of qstat). This enables quick debugging from the head node with the feel of debugging on an interactive node.
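The behavior described above can be sketched as a bash function along these lines. This is a minimal sketch, not the actual qtail: it assumes the standard PBS commands (qsub, qstat, qdel), that the job script is qsub's last argument, and the default output-file naming of script.o-followed-by-job-number, all of which can vary by site and by qsub options.

```shell
# Sketch of a qsub wrapper: submit, tail the output, kill the job on exit.
qtail() {
    local jobid jobnum script out
    jobid=$(qsub "$@") || return 1       # same syntax and options as qsub
    jobnum=${jobid%%.*}                  # e.g. "12345.server" -> "12345"
    script=${!#}                         # assume the job script is the last argument
    out="$(basename "$script").o${jobnum}"   # default PBS stdout file name
    # On Ctrl-C (or termination), delete the job if qstat still lists it
    trap 'qstat | grep -q "$jobnum" && qdel "$jobid"' INT TERM
    touch "$out"                         # let tail -f start before PBS creates it
    tail -f "$out"
}
```

After `qtail myjob.pbs -l walltime=00:10:00` (any qsub invocation), the job's stdout streams to the terminal, and pressing Ctrl-C both stops the stream and deletes the job.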
Binary collisions in CUDA
This is my CUDA implementation of the non-trivial task of pairing particles on the GPU in order to perform binary Coulomb collisions in parallel. During the particle-pairing process, I also calculate the fusion reaction rate.