Releases: brainpy/BrainPy

Version 2.2.4.0

29 Nov 04:03
af185c5

This release has updated many functionalities and fixed several bugs in BrainPy.

New Features

  1. More ANN layers, including brainpy.layers.Flatten and brainpy.layers.Activation.
  2. Optimized connection building in the brainpy.connect module.
  3. Added the CIFAR dataset.
  4. Enhanced the API and documentation for parallel simulations via brainpy.running.cpu_ordered_parallel, brainpy.running.cpu_unordered_parallel, brainpy.running.jax_vectorize_map, and brainpy.running.jax_parallelize_map (see the sketch below).
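
A rough sketch of how these parallel helpers might be used is shown below: it maps a simulation function over several seeds on multiple CPU processes. The call pattern (a list of argument iterables plus a num_process keyword) is an assumption for illustration and is not taken from these notes; please check the brainpy.running API documentation for the exact signature.

import brainpy as bp

def simulate(seed):
  # stand-in for an expensive per-seed simulation (hypothetical workload)
  bp.math.random.seed(seed)
  return float(bp.math.random.rand(3).sum())

# assumed call pattern: map `simulate` over the given seeds on 4 CPU processes
results = bp.running.cpu_ordered_parallel(simulate, [[1, 2, 3, 4]], num_process=4)
print(results)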

Full Changelog: V2.2.3.6...V2.2.4

Version 2.2.3.6

08 Nov 14:24
1e9c8c2

  • Fix a bug in bifurcation analysis
  • Fix a bug in synaptic delay handling

Version 2.2.3.5

05 Nov 03:21
147d3e8

Fix `parameter()` bug (#286)

Version 2.2.3.4

04 Nov 05:24
f3e7d72

New features

  1. This release removes the extensions package and deploys it as a standalone repository, brainpylib.
  2. brainpy.math.random.RandomState is now initialized with seed_or_key rather than seed (see the sketch after this list).
  3. APIs in brainpy.measure support loop and vmap methods; the former is memory-efficient, and the latter is faster.
  4. DNN layers are revised and are all usable.
  5. Upgrade operators to match brainpylib>=0.1.1.
  6. brainpy.math.pre2post_event_sum supports autodiff (including JVP and VJP), so it can be used for SNN training.
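
A small sketch of the new initialization keyword from item 2; passing a raw JAX PRNG key is an assumption based on the keyword's name:

import jax
import brainpy.math as bm

# initialize from an integer seed ...
rng = bm.random.RandomState(seed_or_key=42)
# ... or (assumed) directly from a JAX PRNG key
rng = bm.random.RandomState(seed_or_key=jax.random.PRNGKey(42))
print(rng.rand(3))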

Full Changelog: V2.2.3.3...V2.2.3.4

Version 2.2.3.3

18 Oct 13:45
daf5002

Fix delay update bug (#281)

Version 2.2.3.2

18 Oct 12:45
87c0b86

This release continues to improve the functionality of BrainPy.

New features

  1. Add brainpy.measure.unitary_LFP() for calculating local field potentials (LFPs) from neuronal spikes. For example,
>>> import brainpy as bp
>>> runner = bp.DSRunner(net)  # 'net': an E/I network monitoring 'exc.spike' and 'inh.spike'
>>> runner.run(100.)
>>> lfp = bp.measure.unitary_LFP(runner.mon.ts, runner.mon['exc.spike'], 'exc')
>>> lfp += bp.measure.unitary_LFP(runner.mon.ts, runner.mon['inh.spike'], 'inh')
  2. Add the brainpy.synapses.PoissonInput model. For example,
>>> bp.synapses.PoissonInput(target_variable, num_input, freq, weight)
  3. Upgrade brainpy connection methods, improving their speed. A custom brainpy Connector can now be implemented through
class YourConnector(bp.conn.TwoEndConnector):
  def build_csr(self):
    # build the connection in compressed sparse row (CSR) format
    pass

  def build_coo(self):
    # build the connection in coordinate (COO) format
    pass

  def build_mat(self):
    # build the connection as a dense matrix
    pass
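
As a concrete, hypothetical illustration, a dense all-to-all connector could override only build_coo. The return convention used here (a pair of presynaptic/postsynaptic index arrays) and the pre_num/post_num attributes are assumptions based on the method names, not taken from these notes:

import numpy as np
import brainpy as bp

class AllToAll(bp.conn.TwoEndConnector):
  """Toy connector: every presynaptic neuron projects to every postsynaptic one."""

  def build_coo(self):
    # assumed convention: return (pre_ids, post_ids) index arrays in COO format
    pre_ids = np.repeat(np.arange(self.pre_num), self.post_num)
    post_ids = np.tile(np.arange(self.post_num), self.pre_num)
    return pre_ids, post_ids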

Improvements

  1. Support transformation contexts for JaxArray, and improve the error checking of JaxArray updating within a JIT-compiled function (see the sketch after this list).

  2. Speed up delay retrieval by reversing the delay variable data.

  3. Improve the operator customization methods by using Numba functions.

  4. Fix bugs in GPU operators in brainpylib.
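
As an example of item 1, updating a Variable (a JaxArray subclass) in place inside a jitted function is expected to look roughly like the sketch below; declaring the variable via dyn_vars follows the BrainPy 2.2 JIT interface:

import brainpy.math as bm

counter = bm.Variable(bm.zeros(1))

def add(x):
  counter.value += x  # in-place update of a JaxArray/Variable
  return counter.value

# declare the updated variable so the JIT transform tracks its state
jit_add = bm.jit(add, dyn_vars=[counter])
print(jit_add(1.), jit_add(2.))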

Full Changelog: V2.2.3.1...V2.2.3.2

Version 2.2.3.1

05 Oct 07:18
62a1220

This release fixes installation on Windows systems and improves the installation guides in the official documentation and the installation process.

The following example shows how to install jaxlib after users install and import brainpy:

>>> import brainpy
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\adadu\miniconda3\envs\py3test\lib\site-packages\brainpy\__init__.py", line 10, in <module>
    raise ModuleNotFoundError(

BrainPy needs jaxlib, please install jaxlib.

1. If you are using a Windows system, install jaxlib through

   pip install jaxlib -f https://whls.blob.core.windows.net/unstable/index.html

2. If you are using the macOS platform, install jaxlib through

   pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_releases.html

3. If you are using the Linux platform, install jaxlib through

   pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_releases.html

4. If you are using a Linux + CUDA platform, install jaxlib through

   pip install jaxlib -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html

Note that the versions of "jax" and "jaxlib" should be consistent, e.g., "jax==0.3.14" and "jaxlib==0.3.14".
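
For example, both packages can be pinned to matching versions in a single command:

pip install jax==0.3.14 jaxlib==0.3.14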

For more detailed installation instructions, please see https://brainpy.readthedocs.io/en/latest/quickstart/installation.html#dependency-2-jax

We hope this information makes installing BrainPy much easier.

Version 2.2.3

04 Oct 11:22
174c81a

This release continues to improve the usability of BrainPy.

New Features

  1. Operations between a JaxArray and a NumPy ndarray in a JIT function no longer cause errors.
>>> import numpy as np
>>> import brainpy.math as bm
>>> f = bm.jit(lambda: bm.random.random(3) + np.ones(1))
>>> f()
JaxArray([1.2022058, 1.683937 , 1.3586301], dtype=float32)
  2. A brainpy.math.Variable can now be initialized according to a data shape.
>>> bm.Variable(10, dtype=bm.float32)
Variable([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)
  3. LengthDelay supports a new update method, "concatenate", which is compatible with BP training.
>>> delay = bm.LengthDelay(bm.ones(3), 10, update_method='concatenate')
>>> delay.update(bm.random.random(3))
>>> delay.retrieve(0)
DeviceArray([0.17887115, 0.6738142 , 0.75816643], dtype=float32)
>>> delay.retrieve(10)
DeviceArray([0., 0., 0.], dtype=float32)

Note that, unlike the default update method "rotation", the "concatenate" method can be used to train delay models with BP algorithms; however, it is slower for delay processing.

  4. Support customizing the plotting styles of fixed points. However, more work is needed to support flexible plotting of analyzed results.
>>> from brainpy.analysis import plotstyle, stability
>>> plotstyle.set_plot_schema(stability.SADDLE_NODE, marker='*', markersize=15)

Full Changelog: V2.2.2...V2.2.3

Version 2.2.2

28 Sep 12:33
e25144e

Bug Fixes

This release fixes several bugs in the BrainPy system, including:

  • The jitted functions in the brainpy.measure module no longer exist after they are cleared by brainpy.math.clear_memory_buffer().
  • A bug in the clear_input() function.
  • A bug in the monitor of brainpy.integrators.IntegratorRunner.

Full Changelog: V2.2.1...V2.2.2

Version 2.2.1

09 Sep 10:43
f087e9d

This release fixes bugs found in the codebase and improves the usability and functions of BrainPy.

Bug fixes

  1. Fix the bug of operator customization in brainpy.math.XLACustomOp and brainpy.math.register_op. Operator customization now supports the NumPy and Numba interfaces. For instance,
import brainpy.math as bm

def abs_eval(events, indices, indptr, post_val, values):
  # abstract evaluation: the output has the same shape and dtype as `post_val`
  return post_val

def con_compute(outs, ins):
  # concrete computation, written with the NumPy/Numba interface
  post_val = outs
  events, indices, indptr, _, values = ins
  for i in range(events.size):
    if events[i]:
      for j in range(indptr[i], indptr[i + 1]):
        index = indices[j]
        old_value = post_val[index]
        post_val[index] = values + old_value

event_sum = bm.XLACustomOp(eval_shape=abs_eval, con_compute=con_compute)
  2. Fix the bug of brainpy.tools.DotDict. Now, it is compatible with JAX transformations. For instance,
import brainpy as bp
from jax import vmap

@vmap
def multiple_run(I):
  hh = bp.neurons.HH(1)
  runner = bp.dyn.DSRunner(hh, inputs=('input', I), numpy_mon_after_run=False)
  runner.run(100.)
  return runner.mon

mon = multiple_run(bp.math.arange(2, 10, 2))

New features

  1. Add NumPy operators brainpy.math.mat, brainpy.math.matrix, and brainpy.math.asmatrix.
  2. Improve the translation rules of brainpylib operators, improving their running speed.
  3. Support DSView of a DynamicalSystem instance. Models can now be defined with a slice view of a DS instance. For example,
import brainpy as bp
import brainpy.math as bm


class EINet_V2(bp.dyn.Network):
  def __init__(self, scale=1.0, method='exp_auto'):
    super(EINet_V2, self).__init__()

    # network size
    num_exc = int(3200 * scale)
    num_inh = int(800 * scale)

    # neurons
    self.N = bp.neurons.LIF(num_exc + num_inh,
                            V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                            method=method, V_initializer=bp.initialize.Normal(-55., 2.))

    # synapses
    we = 0.6 / scale  # excitatory synaptic weight (voltage)
    wi = 6.7 / scale  # inhibitory synaptic weight
    self.Esyn = bp.synapses.Exponential(pre=self.N[:num_exc], post=self.N,
                                        conn=bp.connect.FixedProb(0.02),
                                        g_max=we, tau=5.,
                                        output=bp.synouts.COBA(E=0.),
                                        method=method)
    self.Isyn = bp.synapses.Exponential(pre=self.N[num_exc:], post=self.N,
                                        conn=bp.connect.FixedProb(0.02),
                                        g_max=wi, tau=10.,
                                        output=bp.synouts.COBA(E=-80.),
                                        method=method)

net = EINet_V2(scale=1., method='exp_auto')
# simulation
runner = bp.dyn.DSRunner(
    net,
    monitors={'spikes': net.N.spike},
    inputs=[(net.N.input, 20.)]
)
runner.run(100.)

# visualization
bp.visualize.raster_plot(runner.mon.ts, runner.mon['spikes'], show=True)