FAQ
Contents
- FAQ: Bluesky Frequently-Asked Questions for the APS
- Q1: What is the relationship between Bluesky and PyEpics?
- Q2: Is there a Cheat Sheet of the basic commands?
- Q3: How would I move a motor m1 in bluesky?
- Q4: What Python packages (and/or modules) are used? Where are they on the DSERV or /APSshare file server?
- Q5: How to learn Bluesky?
- Q6: Is it possible to make the error dumps less detailed?
Q1: What is the relationship between Bluesky and PyEpics?
Is PyEpics underneath Bluesky? And do we need full-blown Bluesky, or can I just use PyEpics?
Bluesky talks with ophyd. Ophyd talks with PyEpics for all EPICS PVs. PyEpics uses libca and libCom (both from EPICS base) to communicate with IOCs using Channel Access.
The best entry point at the moment is probably the Bluesky on-line tutorial. It is not an FAQ, but it is a good introduction. This FAQ page was started in response to this question.
The Bluesky Framework has component packages:
- The bluesky package coordinates data collection methods.
- The ophyd package describes how the hardware is interfaced (read, control, ...). For EPICS, ophyd uses PyEpics (or caproto, which APS is not planning to use) to communicate with every PV.
- The databroker package receives data from bluesky and stores it. APS is using the MongoDB backend to store most data, with area detector and other such large data stored in files on disk.

In principle, you can use PyEpics directly to control EPICS. PyEpics provides all you need, but not all you probably want.
Via the interface chain from IOC:ChannelAccess:PyEpics:ophyd:bluesky, the bluesky RunEngine provides a standard set of tools for managing the data acquisition workflow. (In PyEpics, you would have to implement these tools for yourself. Your choices would optimize certain features and probably complicate or eliminate other useful features the bluesky RunEngine provides.)
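To make that layering concrete, here is a minimal sketch (the PV name xxx:m1.RBV is an assumption, not a real IOC) that reads the same PV with raw PyEpics and then through ophyd and the bluesky RunEngine:

```python
# Minimal sketch, assuming an EPICS IOC serving a hypothetical PV "xxx:m1.RBV".
import epics                      # PyEpics: direct Channel Access
from ophyd import EpicsSignalRO   # ophyd: wraps the PV as an object
from bluesky import RunEngine
from bluesky.plans import count

print(epics.caget("xxx:m1.RBV"))  # raw PyEpics read, no framework involved

readback = EpicsSignalRO("xxx:m1.RBV", name="readback")
RE = RunEngine({})
RE(count([readback]))             # RunEngine adds metadata, documents, pause/resume, ...
```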
The ophyd package has a means to connect with a single PV (as an ophyd.EpicsSignal object) or to group a set of PVs together into an ophyd.Device. And Devices can contain Signals and Devices, which enables, for example, a single Device to describe the EPICS Area Detector Acquire button (the my_pilatus:cam1:Acquire PV): adpilatus.cam.acquire
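As a sketch of how such objects might be declared (the my_pilatus: prefix and the MyCam/MyPilatus class names are illustrative, not an existing area detector support class):

```python
# Illustrative sketch only; "my_pilatus:" and these class names are assumptions.
from ophyd import Component, Device, EpicsSignal

# a single PV as a Signal
acquire = EpicsSignal("my_pilatus:cam1:Acquire", name="acquire")

# the same PV reached through nested Devices
class MyCam(Device):
    acquire = Component(EpicsSignal, "Acquire")

class MyPilatus(Device):
    cam = Component(MyCam, "cam1:")

adpilatus = MyPilatus("my_pilatus:", name="adpilatus")
# adpilatus.cam.acquire now connects to the my_pilatus:cam1:Acquire PV
```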
An EPICS motor record has lots of useful fields: .VAL and .RBV come to mind first. In ophyd, most of the useful fields are integrated into the ophyd.EpicsMotor Device. .VAL maps to the .user_setpoint attribute and .RBV maps to the .user_readback attribute. Other fields, such as .DMOV, .STOP, and the soft limits, are mapped to attributes so the "motor" can be operated from the ophyd layer or from bluesky.
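For example, a motor object such as the m1 shown in the session below might be created like this (a minimal sketch using the standard ophyd.EpicsMotor; the session itself uses a custom subclass named MyEpicsMotor, and gpjemian:m1 is that IOC's prefix):

```python
# Sketch: connect to the motor record with the prefix used in the session below.
from ophyd import EpicsMotor

m1 = EpicsMotor("gpjemian:m1", name="m1")
m1.wait_for_connection()

print(m1.user_readback.get())   # the motor's .RBV field
print(m1.user_setpoint.get())   # the motor's .VAL field
print(m1.low_limit_travel.get(), m1.high_limit_travel.get())  # soft limits
```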
```
In [4]: m1
Out[4]: MyEpicsMotor(prefix='gpjemian:m1', name='m1', settle_time=0.0,
timeout=None, read_attrs=['user_readback', 'user_setpoint'],
configuration_attrs=['user_offset', 'user_offset_dir', 'velocity',
'acceleration', 'motor_egu'])

In [5]: m1.summary()
data keys (* hints)
-------------------
*m1
m1_user_setpoint

read attrs
----------
user_readback EpicsSignalRO ('m1')
user_setpoint EpicsSignal ('m1_user_setpoint')

config keys
-----------
m1_acceleration
m1_motor_egu
m1_user_offset
m1_user_offset_dir
m1_velocity

configuration attrs
-------------------
user_offset EpicsSignal ('m1_user_offset')
user_offset_dir EpicsSignal ('m1_user_offset_dir')
velocity EpicsSignal ('m1_velocity')
acceleration EpicsSignal ('m1_acceleration')
motor_egu EpicsSignal ('m1_motor_egu')

unused attrs
------------
offset_freeze_switch EpicsSignal ('m1_offset_freeze_switch')
set_use_switch EpicsSignal ('m1_set_use_switch')
motor_is_moving EpicsSignalRO ('m1_motor_is_moving')
motor_done_move EpicsSignalRO ('m1_motor_done_move')
high_limit_switch EpicsSignalRO ('m1_high_limit_switch')
low_limit_switch EpicsSignalRO ('m1_low_limit_switch')
high_limit_travel EpicsSignal ('m1_high_limit_travel')
low_limit_travel EpicsSignal ('m1_low_limit_travel')
direction_of_travel EpicsSignal ('m1_direction_of_travel')
motor_stop EpicsSignal ('m1_motor_stop')
home_forward EpicsSignal ('m1_home_forward')
home_reverse EpicsSignal ('m1_home_reverse')
steps_per_revolution EpicsSignal ('m1_steps_per_revolution')
```
Q2: Is there a Cheat Sheet of the basic commands?
See the first steps guide in the use_bluesky repository: https://github.com/BCDA-APS/use_bluesky/blob/main/first_steps_guide.md
Q3: How would I move a motor m1 in bluesky?
I found the RE(scan()) command; I don't think that's what I want.
In a plan: yield from bps.mv(m1, 1.234). On the command line, either with the RunEngine: RE(bps.mv(m1, 1.234)) or with the raw ophyd interface: m1.move(1.234)
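A short sketch putting those three forms together (the gpjemian:m1 prefix is taken from the example above and is only illustrative):

```python
# Sketch: three equivalent ways to move the motor.
import bluesky.plan_stubs as bps
from bluesky import RunEngine
from ophyd import EpicsMotor

m1 = EpicsMotor("gpjemian:m1", name="m1")   # assumed PV prefix from the example above
RE = RunEngine({})

def move_m1():
    yield from bps.mv(m1, 1.234)   # inside a plan

RE(move_m1())                      # run the plan with the RunEngine
RE(bps.mv(m1, 1.234))              # or call the plan stub directly at the command line
m1.move(1.234)                     # or bypass the RunEngine and use ophyd directly
```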
Q4: What Python packages (and/or modules) are used? Where are they on the DSERV or /APSshare file server?
BCDA provides several (perhaps 7 or 8) versions of Python on /APSshare for Linux operating systems. None of these are updated during operations periods unless absolutely necessary. Look for the HISTORY.txt file for a manual log of the installation/maintenance steps.
1. Miniconda (/APSshare/miniconda) : x86, 64-bit, Python v3, standard library, and conda; use this as the base environment for most custom conda environments
2. Anaconda (/APSshare/anaconda3 & /APSshare/anaconda, both x86_64 & x86) : Miniconda (Python 3 and 2, respectively) plus a lot of packages; good for general Python use
3. Bluesky (/APSshare/anaconda3/Bluesky) : common bluesky support libraries (includes IPython & Jupyter)
4. EPD (/APSshare/epd) : legacy Python 2 support, maintenance ended ca. 2012, do not use for new projects
5. Python (? /APSshare/bin/python ?) : legacy support, maintenance ended before 2006, do not use for new projects
Users are encouraged to create and use custom conda environments (with 1-3 above) so they have some control over the package versions they use. (An environment can be defined by a YAML text file.) First, activate the base conda environment for the distribution from a /bin/bash shell: source ${DIST}/bin/activate (example: DIST=/APSshare/anaconda3/x86_64).
- conda list : get the list of packages known to conda
- conda list -r : get the installation history of these packages
At least 1-4 have pip available. Run pip list from any of these to get the package list known to pip.
The use of conda environments to control the packages is strongly encouraged. These environments may be created easily from a YAML file (such as this example to install Bluesky packages); use Miniconda as the base environment. Line 4 of that YAML file contains a comment with the command to create the environment from the file. Before you run it, activate a base environment from a bash shell (such as source /APSshare/miniconda/bin/activate), download the raw YAML file from GitHub, then execute the command in the comment on line 4 of that file.
Q5: How to learn Bluesky?
Various training resources are available online:
- APS Bluesky 101 introductory course
- APS-specific Training materials
- other training references for APS
- Try Bluesky live notebooks online
- Virtual Machine : complete software stack running as a Linux workstation virtual machine
Documentation of various library packages is also available:
- apstools APS-specific package documentation
- Bluesky home page for the Bluesky framework
- bluesky package documentation (measurement orchestration)
- databroker package documentation (experiment data)
- hklpy package documentation (diffractometer)
- ophyd package documentation (hardware abstraction)
Q6: Is it possible to make the error dumps less detailed?
When running a plan with the bluesky RunEngine (such as RE(some_plan_with_many_custom_steps())), if an error occurs, the error dump is quite long. Can it be shorter?
There is a mechanism in IPython to make error dumps brief: %xmode, as described here:
- https://ipythonbook.com/magic/xmode.html
- https://jakevdp.github.io/PythonDataScienceHandbook/01.06-errors-and-debugging.html
- https://ipython.readthedocs.io/en/stable/interactive/magics.html?highlight=xmode#magic-xmode
Refer to this apstools issue for a detailed commentary and comparison.
There have been both requests and efforts in bluesky to make the tracebacks briefer and more to the point. One of the suggestions for improvement would still require major refactoring of the bluesky (and possibly ophyd) code, so it has not yet been scheduled for implementation. If it were easy, it would have been done by now.
The usual approach to inspection of a long error dump (looking for what and why the error happened) is:
- Read the error message (in the last line or two of the error output): ValueError, TypeError, KeyError, or some such.
- Scroll back from the end of the error output, looking for the last time your code was mentioned. Assuming that it was your own code that triggered the problem, this would likely be the line that triggered the error, or the call just before it.
On the IPython session command line, %xmode Minimal is about the least detail we can get. (Valid modes: Plain, Context, Verbose, and Minimal.) You can set this automatically by adding a line such as get_ipython().run_line_magic('xmode', 'Minimal') into one of your startup files. The get_ipython() function is already defined when you are using an IPython (or Jupyter) session. It will raise an error in a plain (non-IPython) Python session, where the %xmode setting would not apply anyway.
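As a sketch, the startup line can be guarded so the same file also works if imported by a plain Python session (the file path below is only an example of an IPython profile startup file):

```python
# Example startup-file content (e.g., ~/.ipython/profile_default/startup/00-xmode.py).
# The try/except guard lets the same file run harmlessly outside IPython.
try:
    get_ipython().run_line_magic('xmode', 'Minimal')  # brief tracebacks in IPython
except NameError:
    pass  # not running under IPython/Jupyter: nothing to configure
```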