
Releases: infer-actively/pymdp

0.0.7.1

25 Mar 17:58

Patched v0.0.7

What's Changed

  • Add comma after Pillow requirement by @SWauthier in #110, which also addresses Issue #113

New Contributors

  • @SWauthier made their first contribution in #110

Full Changelog: v0.0.7...v0.0.7.1

0.0.7

08 Dec 15:31

What's Changed

  • plot_beliefs and plot_likelihood are now functions in utils.py
  • Bug fix in the calculation of variational free energy during message-passing routines (see #98)
  • Allow optional distributional observations in agent.infer_states(obs) via a new optional boolean argument, distrib_obs, for use in hierarchical set-ups (see the sketch after this list)
  • Added two new environments (envs.visual_foraging.SceneConstruction and envs.visual_foraging.RandomDotMotion)
  • Added functions for quickly initialising empty A and B arrays (also shown in the sketch below)
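
A minimal sketch of the last two items above, assuming pymdp >= 0.0.7; the helper names initialize_empty_A / initialize_empty_B and the exact spelling of the distributional-observation flag are assumptions based on these notes, not guaranteed API:

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

num_obs, num_states, num_controls = [3], [3], [3]

# Quickly initialise empty (zero-filled) A and B object arrays, then fill them.
A = utils.initialize_empty_A(num_obs, num_states)
B = utils.initialize_empty_B(num_states, num_controls)
A[0] = np.eye(3)                           # fully unambiguous likelihood
B[0] = np.stack([np.eye(3)] * 3, axis=-1)  # three "do nothing" actions

agent = Agent(A=A, B=B)

# A distributional observation: a probability vector per modality rather than
# a one-hot index, e.g. beliefs passed down from a higher hierarchical level.
soft_obs = [np.array([0.7, 0.2, 0.1])]
qs = agent.infer_states(soft_obs, distrib_obs=True)  # flag name per the notes above
```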

Full Changelog: v0.0.6...v0.0.7

0.0.6

24 Aug 21:16

What's Changed

  • Bug fixes (see #84, #86)
  • Optional action-sampling precision as an input to Agent (the equivalent of the alpha parameter in SPM), addressing Issue #81: #88 (see the sketch after this list)
  • Policies can now be sampled directly from the policy posterior using the new sample_policy() function in control.py: #89
  • Fixed failing documentation: #92
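
An illustrative sketch of the new action-precision input; alpha here follows SPM's naming as described above, and the surrounding model is a random toy rather than a meaningful task:

```python
from pymdp import utils
from pymdp.agent import Agent

A = utils.random_A_matrix([3], [3])   # random, normalized likelihood
B = utils.random_B_matrix([3], [3])   # random, normalized transitions

# Higher alpha => sharper (more deterministic) action sampling; lower alpha
# => more stochastic choices, mirroring SPM's action precision.
agent = Agent(A=A, B=B, alpha=16.0)

agent.infer_states([0])
q_pi, _ = agent.infer_policies()
action = agent.sample_action()

# Per the notes, control.sample_policy() instead samples a whole policy from
# the posterior q_pi directly; its exact signature may vary across versions.
```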

Full Changelog: v0.0.5...v0.0.6

0.0.5

25 Apr 21:04

Release notes

  • bug fixes, notably a fix in B matrix learning (pymdp.learning.update_state_likelihood_dirichlet()), see here and the sketch after this list
  • fuller documentation based on the JOSS reviews, see here
  • an archived version of the 0.0.5 release (via Zenodo, doi: 10.5281/zenodo.6484849), included for simultaneous publication of the JOSS paper
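
A minimal sketch of calling the fixed B-matrix learning routine; the argument order (pB, B, actions, qs, qs_prev) and the helpers used to build the toy model are assumptions, not guaranteed API:

```python
import numpy as np
from pymdp import utils, learning

B = utils.random_B_matrix([3], [2])      # one factor: 3 states, 2 actions
pB = utils.obj_array(1)
pB[0] = np.ones_like(B[0])               # flat Dirichlet prior over transitions

qs_prev = utils.obj_array(1)
qs_prev[0] = np.array([1.0, 0.0, 0.0])   # belief before acting
qs = utils.obj_array(1)
qs[0] = np.array([0.0, 1.0, 0.0])        # belief after acting
action = np.array([1])                   # action taken between the two beliefs

# Accumulate Dirichlet counts on the taken action's transition slice
pB_new = learning.update_state_likelihood_dirichlet(pB, B, action, qs, qs_prev)
```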

pymdp 0.0.4

10 Jan 14:50

Updates include:

  • Read the Docs / Sphinx-based documentation, with links to it from a cleaned-up and shortened README.md
  • new epistemic chaining demo (on README.md and in the documentation)
  • renaming of many inference.py and learning.py functions to make them more understandable

BUG FIXES:

  • Corrected the action-sampling routine (pymdp.control.sample_action()) so that, when marginalizing the posterior probabilities of each policy per action, we only sum the probabilities assigned to each action for the next / successive timestep, not for all timesteps. Before this fix, pooling across all timesteps could lead to maladaptive behavior in scenarios where temporally deep planning is required (see the sketch below).
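
A toy illustration of the corrected marginalization, using illustrative names rather than the pymdp API: each policy contributes its posterior probability only to its first upcoming action.

```python
import numpy as np

# Three 2-step policies over one control factor: rows are timesteps.
policies = [np.array([[0], [0]]), np.array([[0], [1]]), np.array([[1], [0]])]
q_pi = np.array([0.2, 0.5, 0.3])   # posterior over policies
num_actions = 2

action_marginal = np.zeros(num_actions)
for p, policy in zip(q_pi, policies):
    first_action = policy[0, 0]    # only the next timestep's action counts
    action_marginal[first_action] += p

print(action_marginal)             # [0.7, 0.3] -> action 0 is selected
```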

pymdp 0.0.3

27 Oct 20:53

Updates include:

  • more demo Colab notebooks now linked on the main page
  • model checks when constructing an Agent() (e.g. normalization checks)
  • D vector learning / Bayesian model reduction
  • can pass an E vector (prior over policies) to the Agent() constructor (see the sketch after this list)
  • time-varying prior preferences (i.e. C can now be a matrix rather than having to be a vector)
  • updated dependencies in setup.py to allow forward compatibility with newer versions of various packages
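
A hedged sketch of the new constructor options above (an E prior over policies and a matrix-valued, time-varying C); the shapes and the exact handling of matrix-valued C are assumptions:

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

A = utils.random_A_matrix([3], [3])
B = utils.random_B_matrix([3], [2])   # 2 control states -> 2 one-step policies

# Time-varying preferences: one column per timestep (T = 4), increasingly
# preferring observation 2 as time goes on.
C = utils.obj_array(1)
C[0] = np.zeros((3, 4))
C[0][2, :] = np.linspace(0.0, 2.0, 4)

# Flat prior over the 2 policies implied by policy_len=1
E = np.ones(2) / 2

agent = Agent(A=A, B=B, C=C, E=E, policy_len=1)
```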