v.2.1-beta-1 (2.1.b1)
This fixes a critical bug affecting half-set (gold-standard) refinement.

Starting with version 2.0.4, each iteration of auto-refinement incorrectly
uses the same half-set reconstruction for the alignment of all data. The
reported FSC and resolution are still based on comparing two half-set
reconstructions that contain completely separate image data, but a degree
of reference bias is nevertheless present: because both halves are aligned
against a common reference, the reconstructions are subject to a degree of
over-fitting, and the reported resolution is higher than it would be under
true gold-standard conditions.

To claim a so-called "gold-standard" resolution, use any version up to and
including 2.0.3, or version 2.1b1 (this release).
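
For context: in gold-standard refinement each particle is aligned only against
the reconstruction made from its own half of the data, and the resolution is
read off from the Fourier shell correlation (FSC) between the two independently
refined half-maps. The snippet below is a minimal, stand-alone sketch of that
comparison, not RELION code; the function name shell_fsc and the flat per-shell
input vectors are illustrative assumptions.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <cstdio>
#include <vector>

// FSC for one Fourier shell, given the complex coefficients of the two
// half-maps that fall in that shell:
//   FSC = Re( sum F1 * conj(F2) ) / sqrt( sum |F1|^2 * sum |F2|^2 )
double shell_fsc(const std::vector<std::complex<double> >& half1,
                 const std::vector<std::complex<double> >& half2)
{
    std::complex<double> cross(0.0, 0.0);
    double power1 = 0.0, power2 = 0.0;
    for (std::size_t i = 0; i < half1.size() && i < half2.size(); ++i)
    {
        cross  += half1[i] * std::conj(half2[i]);
        power1 += std::norm(half1[i]);   // std::norm gives |F|^2
        power2 += std::norm(half2[i]);
    }
    const double denom = std::sqrt(power1 * power2);
    return (denom > 0.0) ? cross.real() / denom : 0.0;
}

int main()
{
    // Two tiny made-up "shells" just to show the call.
    std::vector<std::complex<double> > h1, h2;
    h1.push_back(std::complex<double>(1.0,  0.5));
    h1.push_back(std::complex<double>(0.3, -0.2));
    h2.push_back(std::complex<double>(0.9,  0.6));
    h2.push_back(std::complex<double>(0.2, -0.1));
    std::printf("FSC in this shell: %.3f\n", shell_fsc(h1, h2));
    return 0;
}
```

If the same reference is used to align both halves, noise from that shared
reference correlates into both half-maps, which pushes this correlation (and
hence the reported resolution) higher than it should be.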
bforsbe committed Aug 17, 2017
1 parent 1b9f08a commit e7607a8
Showing 2 changed files with 36 additions and 30 deletions.
2 changes: 1 addition & 1 deletion src/macros.h
@@ -45,7 +45,7 @@
#ifndef MACROS_H
#define MACROS_H

#define RELION_VERSION "2.1-beta-0"
#define RELION_VERSION "2.1-beta-1"

#include <math.h>
#include "src/error.h"
64 changes: 35 additions & 29 deletions src/ml_optimiser_mpi.cpp
@@ -664,21 +664,24 @@ void MlOptimiserMpi::initialiseWorkLoad()

MPI_Barrier(MPI_COMM_WORLD);

if(!node->isMaster())
{
/* Set up a bool-array with reference responsibilities for each rank. That is;
* if(PPrefRank[i]==true) //on this rank
* (set up reference vol and MPI_Bcast)
* else()
* (prepare to receive from MPI_Bcast)
*/
mymodel.PPrefRank.assign(mymodel.PPref.size(),true);
if(!do_split_random_halves) //only Bcast-prep on classification, in other cases halves have been separately made by all ranks after data exchange in maximization
{
if(!node->isMaster())
{
/* Set up a bool-array with reference responsibilities for each rank. That is;
* if(PPrefRank[i]==true) //on this rank
* (set up reference vol and MPI_Bcast)
* else()
* (prepare to receive from MPI_Bcast)
*/
mymodel.PPrefRank.assign(mymodel.PPref.size(),true);

for(int i=0; i<mymodel.PPref.size(); i++)
mymodel.PPrefRank[i] = ((i)%(node->size-1) == node->rank-1);
}
for(int i=0; i<mymodel.PPref.size(); i++)
mymodel.PPrefRank[i] = ((i)%(node->size-1) == node->rank-1);
}
MPI_Barrier(MPI_COMM_WORLD);
}

MPI_Barrier(MPI_COMM_WORLD);
//#define DEBUG_WORKLOAD
#ifdef DEBUG_WORKLOAD
std::cerr << " node->rank= " << node->rank << " my_first_ori_particle_id= " << my_first_ori_particle_id << " my_last_ori_particle_id= " << my_last_ori_particle_id << std::endl;
@@ -754,26 +754,29 @@ void MlOptimiserMpi::expectation()
timer.toc(TIMING_EXP_1a);
#endif

if (!node->isMaster())
for(int i=0; i<mymodel.PPref.size(); i++)
if(!do_split_random_halves) //only Bcast on classification, in other cases halves have been separately made by all ranks after data exchange in maximization
{
if (!node->isMaster())
{
/* NOTE: the first slave has rank 0 on the slave communicator node->slaveC,
* that's why we don't have to add 1, like this;
* int sender = (i)%(node->size - 1)+1 ;
*/

int sender = (i)%(node->size - 1); // which rank did the heavy lifting? -> sender of information
for(int i=0; i<mymodel.PPref.size(); i++)
{
// Communicating over all slaves means we don't have to allocate on the master.
node->relion_MPI_Bcast(MULTIDIM_ARRAY(mymodel.PPref[i].data),
MULTIDIM_SIZE(mymodel.PPref[0].data), MY_MPI_COMPLEX, sender, node->slaveC);
node->relion_MPI_Bcast(MULTIDIM_ARRAY(mymodel.tau2_class[i]),
MULTIDIM_SIZE(mymodel.tau2_class[0]), MY_MPI_DOUBLE, sender, node->slaveC);
/* NOTE: the first slave has rank 0 on the slave communicator node->slaveC,
* that's why we don't have to add 1, like this;
* int sender = (i)%(node->size - 1)+1 ;
*/

int sender = (i)%(node->size - 1); // which rank did the heavy lifting? -> sender of information
{
// Communicating over all slaves means we don't have to allocate on the master.
node->relion_MPI_Bcast(MULTIDIM_ARRAY(mymodel.PPref[i].data),
MULTIDIM_SIZE(mymodel.PPref[0].data), MY_MPI_COMPLEX, sender, node->slaveC);
node->relion_MPI_Bcast(MULTIDIM_ARRAY(mymodel.tau2_class[i]),
MULTIDIM_SIZE(mymodel.tau2_class[0]), MY_MPI_DOUBLE, sender, node->slaveC);
}
}
}

MPI_Barrier(MPI_COMM_WORLD);

MPI_Barrier(MPI_COMM_WORLD);
}
#ifdef DEBUG
if(node->rank==2)
{
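
As an aside on what the ml_optimiser_mpi.cpp hunks are doing: during
classification, each projector reference (mymodel.PPref[i]) is set up on one
slave rank in round-robin fashion and then broadcast to the remaining slaves,
which is what the PPrefRank bookkeeping and the relion_MPI_Bcast calls above
implement. The fix wraps both in if(!do_split_random_halves), so that
gold-standard refinement, where each half builds its own references after the
data exchange in maximization, no longer shares a single broadcast reference.
The stand-alone sketch below only illustrates the round-robin-owner-plus-
broadcast pattern; it uses plain MPI_COMM_WORLD rather than RELION's
node->slaveC slave communicator, dummy float buffers in place of projector
data, and lets every rank take part (no separate master), so it is an
assumption-laden illustration, not the RELION implementation.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nr_refs  = 4;      // stand-in for mymodel.PPref.size()
    const int ref_size = 1024;   // dummy size of one reference volume
    std::vector<std::vector<float> > refs(nr_refs,
                                          std::vector<float>(ref_size, 0.f));

    for (int i = 0; i < nr_refs; ++i)
    {
        // Round-robin ownership: reference i is prepared by rank i % size.
        const int sender = i % size;
        if (rank == sender)
        {
            // The owning rank fills in "its" reference
            // (stand-in for setting up a projector from a reconstruction).
            for (int j = 0; j < ref_size; ++j)
                refs[i][j] = static_cast<float>(i);
        }
        // Everyone receives the finished reference from its owner.
        MPI_Bcast(refs[i].data(), ref_size, MPI_FLOAT, sender, MPI_COMM_WORLD);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
```

Compiled with mpicxx and run under mpirun (assumed standard MPI tooling), each
reference ends up identical on all ranks regardless of which rank prepared it.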
