Releases: brainpy/BrainPy
Version 2.2.4.0
This release has updated many functionalities and fixed several bugs in BrainPy.
New Features
- More ANN layers, including `brainpy.layers.Flatten` and `brainpy.layers.Activation`.
- Optimized connection building in the `brainpy.connect` module.
- CIFAR dataset.
- Enhanced APIs and documentation for parallel simulations via `brainpy.running.cpu_ordered_parallel`, `brainpy.running.cpu_unordered_parallel`, `brainpy.running.jax_vectorize_map` and `brainpy.running.jax_parallelize_map`.
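The ordered/unordered distinction matters when results must line up with their inputs. Below is a minimal stdlib sketch of the idea behind `cpu_ordered_parallel` vs `cpu_unordered_parallel`, using `concurrent.futures`; it is an illustration of the concept, not BrainPy's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def simulate(current):
    """Stand-in for a BrainPy simulation run with one parameter value."""
    return current * 2.0

inputs = [1.0, 2.0, 3.0, 4.0]

with ThreadPoolExecutor(max_workers=4) as pool:
    # "ordered": results come back in the same order as `inputs`
    ordered = list(pool.map(simulate, inputs))

    # "unordered": results come back as each task finishes,
    # which may not match the submission order
    futures = [pool.submit(simulate, i) for i in inputs]
    unordered = [f.result() for f in as_completed(futures)]
```

The unordered variant can return earlier when task durations vary, at the cost of losing the input-to-output correspondence.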
What's Changed
- add Activation and Flatten class by @LuckyHFC in #291
- optimizes the connect time when using gpu by @MamieZhu in #293
- datasets::vision: add cifar dataset by @hbelove in #292
- fix #294: remove `VariableView` in the `dyn_vars` of a runner by @chaoming0625 in #295
- update issue template by @chaoming0625 in #296
- add multiprocessing functions for batch running of BrainPy functions by @chaoming0625 in #298
- upgrade connection apis by @chaoming0625 in #299
- fix #300: update parallelization api documentation by @chaoming0625 in #302
- update doc by @chaoming0625 in #303
New Contributors
- @LuckyHFC made their first contribution in #291
- @MamieZhu made their first contribution in #293
- @hbelove made their first contribution in #292
Full Changelog: V2.2.3.6...V2.2.4
Version 2.2.3.6
- fix bifurcation analysis bug
- fix synaptic delay bug
Version 2.2.3.5
fix `parameter()` bug (#286)
Version 2.2.3.4
New features
- This release removes the `extensions` package and deploys it as a standalone repository, brainpylib.
- Initializing `brainpy.math.random.RandomState` with `seed_or_key`, rather than `seed`.
- APIs in `brainpy.measure` support `loop` and `vmap` methods; the former is memory-efficient and the latter is faster.
- DNN layers are revised and are all usable.
- Upgrade operators to match `brainpylib>=0.1.1`.
- `brainpy.math.pre2post_event_sum` supports autodiff (including JVP and VJP), so it can be used for SNN training.
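As a rough illustration of what an event-driven pre-to-post sum computes, here is a plain-NumPy sketch over a CSR-encoded connectivity; the function name and signature are illustrative stand-ins, not the `brainpylib` operator itself:

```python
import numpy as np

def pre2post_event_sum_sketch(events, indices, indptr, post_num, value):
    """For every presynaptic neuron that spiked, add `value` to each of
    its postsynaptic targets, reading targets from CSR (indices, indptr)."""
    post = np.zeros(post_num)
    for pre_id in range(events.size):
        if events[pre_id]:
            for k in range(indptr[pre_id], indptr[pre_id + 1]):
                post[indices[k]] += value
    return post

# 3 presynaptic neurons, 4 postsynaptic neurons
indices = np.array([0, 1, 2, 3, 1])   # postsynaptic targets
indptr = np.array([0, 2, 4, 5])       # per-pre-neuron offsets into `indices`
events = np.array([True, False, True])
post = pre2post_event_sum_sketch(events, indices, indptr, post_num=4, value=0.5)
# pre 0 hits posts {0, 1}; pre 1 is silent; pre 2 hits post {1}
```

Because each output entry is a sum of contributions weighted by `value`, the operation is linear in `value`, which is what makes JVP/VJP rules straightforward to define.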
Full Changelog: V2.2.3.3...V2.2.3.4
Version 2.2.3.3
fix delay update bug (#281)
Version 2.2.3.2
This release continues to improve the functionality of BrainPy.
New features
- Add `brainpy.measure.unitary_LFP()` for calculating LFP from neuronal spikes:

  ```python
  >>> import brainpy as bp
  >>> runner = bp.DSRunner()
  >>> runner.run(100)
  >>> lfp = bp.measure.unitary_LFP(runner.mon.ts, runner.mon['exc.spike'], 'exc')
  >>> lfp += bp.measure.unitary_LFP(runner.mon.ts, runner.mon['inh.spike'], 'inh')
  ```

- Add the `brainpy.synapses.PoissonInput` model:

  ```python
  >>> bp.synapses.PoissonInput(target_variable, num_input, freq, weight)
  ```

- Upgrade BrainPy connection methods, improving their speed. A new customization of a BrainPy `Connector` can be implemented through:

  ```python
  class YourConnector(bp.conn.TwoEndConnector):
    def build_csr(self):
      pass

    def build_coo(self):
      pass

    def build_mat(self):
      pass
  ```

Improvements
- Support transformation contexts for `JaxArray`, and improve the error checking of `JaxArray` updating in a JIT function.
- Speed up delay retrieval by reversing delay variable data.
- Improve the operator customization methods by using Numba functions.
- Fix bugs in GPU operators in `brainpylib`.
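To illustrate what the three `build_*` methods of a custom `Connector` are expected to produce, here is a hypothetical connection pattern converted between COO, CSR, and dense-matrix formats with plain NumPy; the helper names are illustrative, not BrainPy API:

```python
import numpy as np

# A hypothetical fixed connection pattern in COO form: (pre, post) pairs.
pre_ids = np.array([0, 0, 1, 2])
post_ids = np.array([1, 2, 2, 0])

def coo_to_mat(pre_ids, post_ids, num_pre, num_post):
    """What build_mat returns: a dense boolean connection matrix."""
    mat = np.zeros((num_pre, num_post), dtype=bool)
    mat[pre_ids, post_ids] = True
    return mat

def coo_to_csr(pre_ids, post_ids, num_pre):
    """What build_csr returns: (indices, indptr), post targets grouped by pre."""
    order = np.argsort(pre_ids, kind='stable')
    indices = post_ids[order]
    counts = np.bincount(pre_ids, minlength=num_pre)
    indptr = np.concatenate([[0], np.cumsum(counts)])
    return indices, indptr

mat = coo_to_mat(pre_ids, post_ids, num_pre=3, num_post=3)
indices, indptr = coo_to_csr(pre_ids, post_ids, num_pre=3)
```

Providing only one of the three builders is usually enough, since the formats are mutually convertible as sketched above; a connector library can derive the others on demand.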
What's Changed
- Docs: add compile_brainpylib documentation by @ztqakita in #270
- add the `PoissonInput` model and `unitary_LFP()` method by @chaoming0625 in #271
- organize brainpylib for future extensions by @chaoming0625 in #272
- Update lowdim analyzer by @ztqakita in #273
- speedup connections in One2One, All2All, GridFour, GridEight, and others by @chaoming0625 in #274
- consistent brainpylib with brainpy operators by @chaoming0625 in #275
- Fix test bugs by @chaoming0625 in #276
- Fixed setup mac script by @ztqakita in #278
- JaxArray transformation context by @chaoming0625 in #277
- speedup delay retrieval by reversing delay variable data by @chaoming0625 in #279
- Updating apis for connections and operation registeration by @chaoming0625 in #280
Full Changelog: V2.2.3.1...V2.2.3.2
Version 2.2.3.1
This release fixes the installation on Windows systems and improves the installation guides in the official documentation and installation process.
The following example shows how to install jaxlib after users install and import brainpy:
```
>>> import brainpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\adadu\miniconda3\envs\py3test\lib\site-packages\brainpy\__init__.py", line 10, in <module>
    raise ModuleNotFoundError(
BrainPy needs jaxlib, please install jaxlib.

1. If you are using Windows system, install jaxlib through

   >>> pip install jaxlib -f https://whls.blob.core.windows.net/unstable/index.html

2. If you are using macOS platform, install jaxlib through

   >>> pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_releases.html

3. If you are using Linux platform, install jaxlib through

   >>> pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_releases.html

4. If you are using Linux + CUDA platform, install jaxlib through

   >>> pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```
Note that the versions of "jax" and "jaxlib" should be consistent, e.g., "jax==0.3.14" and "jaxlib==0.3.14".
For more detailed installation instructions, please see https://brainpy.readthedocs.io/en/latest/quickstart/installation.html#dependency-2-jax
We hope this information makes installing BrainPy much easier.
Version 2.2.3
This release continues to improve the usability of BrainPy.
New Features
- Operations between a `JaxArray` and a NumPy `ndarray` in a JIT function no longer cause errors.

  ```python
  >>> import numpy as np
  >>> import brainpy.math as bm
  >>> f = bm.jit(lambda: bm.random.random(3) + np.ones(1))
  >>> f()
  JaxArray([1.2022058, 1.683937 , 1.3586301], dtype=float32)
  ```

- Initializing a `brainpy.math.Variable` according to the data shape.

  ```python
  >>> bm.Variable(10, dtype=bm.float32)
  Variable([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
  ```

- `LengthDelay` supports a new method called `"concatenate"`, which is compatible with BP training.

  ```python
  >>> delay = bm.LengthDelay(bm.ones(3), 10, update_method='concatenate')
  >>> delay.update(bm.random.random(3))
  >>> delay.retrieve(0)
  DeviceArray([0.17887115, 0.6738142 , 0.75816643], dtype=float32)
  >>> delay.retrieve(10)
  DeviceArray([0., 0., 0.], dtype=float32)
  ```

  Note that, compared with the default updating method `"rotation"`, `"concatenate"` can be used to train delay models with BP algorithms. However, it is slower for delay processing.

- Support customizing the plotting styles of fixed points. However, there is still work to do to support flexible plotting of analyzed results.

  ```python
  >>> from brainpy.analysis import plotstyle, stability
  >>> plotstyle.set_plot_schema(stability.SADDLE_NODE, marker='*', markersize=15)
  ```
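The trade-off between the two delay updating methods can be sketched with plain NumPy; this is a toy model that assumes nothing about `LengthDelay`'s internals. "concatenate" rebuilds the history buffer functionally each step, which auto-differentiation can trace, while "rotation" mutates a ring buffer in place, which copies no data but is opaque to backprop:

```python
import numpy as np

class ConcatDelay:
  """'concatenate': prepend the newest state; row i holds the state i steps ago."""
  def __init__(self, init, num_steps):
    self.data = np.tile(init, (num_steps + 1, 1))

  def update(self, value):
    # Purely functional rebuild: a fresh buffer is created every step.
    self.data = np.concatenate([value[None], self.data[:-1]])

  def retrieve(self, steps):
    return self.data[steps]

class RotationDelay:
  """'rotation': a ring buffer; only the head index moves, no data is copied."""
  def __init__(self, init, num_steps):
    self.data = np.tile(init, (num_steps + 1, 1))
    self.head = 0

  def update(self, value):
    # In-place mutation: fast, but hard to differentiate through.
    self.head = (self.head - 1) % len(self.data)
    self.data[self.head] = value

  def retrieve(self, steps):
    return self.data[(self.head + steps) % len(self.data)]

d1 = ConcatDelay(np.ones(3), 10)
d2 = RotationDelay(np.ones(3), 10)
x = np.array([0.2, 0.4, 0.6])
d1.update(x)
d2.update(x)
# Both agree: step 0 is the newest value, step 10 is the initial state.
```

Both classes retrieve identical histories; only the update strategy differs, which mirrors why "concatenate" trades speed for BP compatibility.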
What's Changed
- Update installation info and delay apis by @chaoming0625 in #263
- Support initializing a Variable by data shape by @chaoming0625 in #265
- operations with JaxArray and numpy ndarray do not cause errors by @chaoming0625 in #266
- Update `VariableView` and analysis plotting apis by @chaoming0625 in #268
Full Changelog: V2.2.2...V2.2.3
Version 2.2.2
Bug Fixes
This release fixes several bugs in the BrainPy system, including:
- The jitted functions in the `brainpy.measure` module no longer exist after they are cleared by `brainpy.math.clear_memory_buffer()`.
- The bug in the `clear_input()` function.
- The bug in the monitor of `brainpy.integrators.IntegratorRunner`.
What's Changed
- update loop docs and apis by @chaoming0625 in #261
- fix some bugs by @chaoming0625 in #262
Full Changelog: V2.2.1...V2.2.2
Version 2.2.1
This release fixes bugs found in the codebase and improves the usability and functions of BrainPy.
Bug fixes
- Fix the bug of operator customization in `brainpy.math.XLACustomOp` and `brainpy.math.register_op`. Now, operator customization is supported through the NumPy and Numba interfaces. For instance,

  ```python
  import brainpy.math as bm

  def abs_eval(events, indices, indptr, post_val, values):
    return post_val

  def con_compute(outs, ins):
    post_val = outs
    events, indices, indptr, _, values = ins
    for i in range(events.size):
      if events[i]:
        for j in range(indptr[i], indptr[i + 1]):
          index = indices[j]
          old_value = post_val[index]
          post_val[index] = values + old_value

  event_sum = bm.XLACustomOp(eval_shape=abs_eval, con_compute=con_compute)
  ```

- Fix the bug of `brainpy.tools.DotDict`. Now, it is compatible with JAX transformations. For instance,
  ```python
  import brainpy as bp
  from jax import vmap

  @vmap
  def multiple_run(I):
    hh = bp.neurons.HH(1)
    runner = bp.dyn.DSRunner(hh, inputs=('input', I), numpy_mon_after_run=False)
    runner.run(100.)
    return runner.mon

  mon = multiple_run(bp.math.arange(2, 10, 2))
  ```

New features
- Add NumPy operators `brainpy.math.mat`, `brainpy.math.matrix`, and `brainpy.math.asmatrix`.
- Improve the translation rules of brainpylib operators, improving their running speed.
- Support `DSView` of a `DynamicalSystem` instance. Now, models can be defined with a slice view of a DS instance. For example,
  ```python
  import brainpy as bp
  import brainpy.math as bm

  class EINet_V2(bp.dyn.Network):
    def __init__(self, scale=1.0, method='exp_auto'):
      super(EINet_V2, self).__init__()
      # network size
      num_exc = int(3200 * scale)
      num_inh = int(800 * scale)
      # neurons
      self.N = bp.neurons.LIF(num_exc + num_inh,
                              V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                              method=method, V_initializer=bp.initialize.Normal(-55., 2.))
      # synapses
      we = 0.6 / scale  # excitatory synaptic weight (voltage)
      wi = 6.7 / scale  # inhibitory synaptic weight
      self.Esyn = bp.synapses.Exponential(pre=self.N[:num_exc], post=self.N,
                                          conn=bp.connect.FixedProb(0.02),
                                          g_max=we, tau=5.,
                                          output=bp.synouts.COBA(E=0.),
                                          method=method)
      self.Isyn = bp.synapses.Exponential(pre=self.N[num_exc:], post=self.N,
                                          conn=bp.connect.FixedProb(0.02),
                                          g_max=wi, tau=10.,
                                          output=bp.synouts.COBA(E=-80.),
                                          method=method)

  net = EINet_V2(scale=1., method='exp_auto')

  # simulation
  runner = bp.dyn.DSRunner(
    net,
    monitors={'spikes': net.N.spike},
    inputs=[(net.N.input, 20.)]
  )
  runner.run(100.)

  # visualization
  bp.visualize.raster_plot(runner.mon.ts, runner.mon['spikes'], show=True)
  ```