During the training process, the generated Japanese documents can only recognize . Here is the error report; how can I resolve it?

Some weights of the model checkpoint at donut-base-finetuned-docvqa-main were not used when initializing DonutModel: ['encoder.encoder.layers.0.blocks.1.attention.self.key.weight', 'encoder.encoder.layers.2.blocks.0.layernorm_after.bias', 'encoder.encoder.layers.2.blocks.7.layernorm_after.bias', 'encoder.encoder.layers.2.blocks.8.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.10.output.dense.bias', 'encoder.encoder.layers.2.blocks.10.attention.output.dense.bias', 'encoder.encoder.layers.2.blocks.10.attention.self.query.weight', 'encoder.encoder.layers.3.blocks.1.intermediate.dense.bias', 'encoder.encoder.layers.2.blocks.8.attention.self.relative_position_bias_table', 'decoder.model.decoder.layers.1.self_attn.out_proj.weight', 'encoder.encoder.layers.2.blocks.10.attention.self.value.bias', 'encoder.encoder.layers.2.blocks.10.attention.self.key.bias', 'encoder.encoder.layers.2.blocks.12.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.11.intermediate.dense.weight', 'encoder.encoder.layers.2.blocks.1.layernorm_after.weight', 'decoder.model.decoder.layers.2.final_layer_norm.bias', 'encoder.encoder.layers.2.blocks.6.output.dense.weight', 'encoder.encoder.layers.3.blocks.0.layernorm_after.bias', 'encoder.encoder.layers.2.blocks.1.intermediate.dense.bias', 'encoder.encoder.layers.2.blocks.3.layernorm_before.weight', 'encoder.encoder.layers.2.blocks.4.layernorm_after.weight', 'encoder.encoder.layers.2.blocks.7.attention.self.key.weight', 'encoder.encoder.layers.1.blocks.0.attention.self.relative_position_index', 'decoder.model.decoder.layers.0.self_attn.v_proj.weight', 'encoder.encoder.layers.3.blocks.1.attention.output.dense.bias', 'encoder.encoder.layers.1.blocks.0.layernorm_after.weight', 'encoder.encoder.layers.1.blocks.1.output.dense.weight', 'decoder.model.decoder.layers.2.encoder_attn.out_proj.bias', 'decoder.model.decoder.layers.3.self_attn.q_proj.bias', 'decoder.model.decoder.layers.1.fc2.bias', 'encoder.encoder.layers.1.blocks.0.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.0.layernorm_before.weight', 'encoder.encoder.layers.2.blocks.4.attention.output.dense.bias', 'encoder.encoder.layers.2.blocks.3.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.9.layernorm_after.weight', 'encoder.encoder.layers.3.blocks.1.layernorm_after.weight', 'encoder.encoder.layers.2.blocks.4.attention.self.key.weight', 'encoder.encoder.layers.2.blocks.8.attention.self.relative_position_index', 'decoder.model.decoder.layers.2.self_attn_layer_norm.bias', 'decoder.model.decoder.layers.3.encoder_attn_layer_norm.weight', 'decoder.model.decoder.layers.2.self_attn.out_proj.weight', 'decoder.model.decoder.layers.1.encoder_attn.v_proj.bias', 'encoder.encoder.layers.2.blocks.13.attention.self.relative_position_bias_table', 'encoder.encoder.layers.2.blocks.8.intermediate.dense.weight', 'encoder.encoder.layers.2.blocks.13.layernorm_after.bias', 'encoder.encoder.layers.0.blocks.1.output.dense.weight', 'decoder.model.decoder.layers.3.encoder_attn.k_proj.weight', 'encoder.encoder.layers.2.blocks.3.layernorm_after.bias', 'encoder.encoder.layers.3.blocks.0.attention.self.relative_position_index', 'encoder.encoder.layers.2.blocks.2.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.6.layernorm_after.bias', 'encoder.encoder.layers.3.blocks.0.layernorm_before.bias', 'encoder.encoder.layers.2.blocks.11.layernorm_after.weight', 
'encoder.encoder.layers.2.blocks.3.layernorm_after.weight', 'encoder.encoder.layers.2.blocks.10.layernorm_before.bias', 'decoder.model.decoder.layers.3.fc2.bias', 'encoder.encoder.layers.2.blocks.11.layernorm_after.bias', 'encoder.encoder.layers.1.blocks.1.attention.self.relative_position_bias_table', 'encoder.encoder.layers.2.blocks.8.attention.self.query.weight', 'encoder.encoder.layers.2.blocks.3.intermediate.dense.weight', 'encoder.encoder.layers.2.blocks.0.attention.self.value.weight', 'encoder.encoder.layers.2.blocks.10.layernorm_before.weight', 'decoder.model.decoder.layers.0.final_layer_norm.bias', 'encoder.encoder.layers.2.blocks.7.attention.self.query.weight', 'encoder.encoder.layers.2.blocks.2.attention.output.dense.weight', 'encoder.encoder.layers.2.blocks.5.layernorm_before.weight', 'decoder.model.decoder.layers.1.self_attn.v_proj.bias', 'encoder.encoder.layers.1.blocks.1.layernorm_before.bias', 'decoder.model.decoder.layers.2.encoder_attn_layer_norm.weight', 'encoder.encoder.layers.2.blocks.7.intermediate.dense.bias', 'encoder.encoder.layers.2.blocks.0.intermediate.dense.bias', 'encoder.encoder.layers.2.blocks.9.attention.self.query.weight', 'decoder.model.decoder.layers.1.encoder_attn.out_proj.bias', 'decoder.model.decoder.layers.0.encoder_attn.v_proj.weight', 'decoder.model.decoder.layers.2.encoder_attn.out_proj.weight', 'encoder.encoder.layers.0.blocks.0.layernorm_before.bias', 'decoder.model.decoder.layers.2.self_attn.v_proj.bias', 'encoder.encoder.layers.1.blocks.0.attention.self.key.bias', 'encoder.encoder.layers.2.blocks.7.attention.output.dense.weight', 'encoder.encoder.layers.1.blocks.1.layernorm_after.weight', 'encoder.encoder.layers.2.blocks.4.layernorm_before.weight',
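For context, here is a minimal sketch of the kind of loading call that produces this transformers warning, assuming the training script follows the original clova-ai/donut setup (the warning names DonutModel). The config values in the sketch are hypothetical examples, not taken from the issue; they would need to match whatever the checkpoint was trained with:

```python
# Minimal sketch, assuming the clova-ai/donut codebase. The checkpoint
# directory name is taken from the warning above; input_size and
# max_length are hypothetical placeholders, not values from the issue.
from donut import DonutModel

model = DonutModel.from_pretrained(
    "donut-base-finetuned-docvqa-main",  # local checkpoint named in the warning
    input_size=[1280, 960],              # hypothetical encoder input size
    max_length=768,                      # hypothetical decoder max length
    ignore_mismatched_sizes=True,        # tolerate size mismatches instead of failing
)
```

A long "weights were not used" list like this is usually a symptom of a class or config mismatch rather than a corrupted file: if the checkpoint was exported in the Hugging Face transformers format (where Donut ships as a VisionEncoderDecoderModel) but is loaded with the original DonutModel class, or if the encoder/decoder configuration differs from the one used at training time, from_pretrained reports every checkpoint key it could not place as "not used".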