Describe the feature
When creating a model with the insert_overwrite incremental strategy, --full-refresh does not seem to work.

Describe alternatives you've considered
From the code, it seems full refresh only supports views in the incremental materialization:
https://github.com/aws-samples/dbt-glue/blob/main/dbt/include/glue/macros/materializations/incremental/incremental.sql#L63

Additional context
Steps to reproduce:
1. Create a model containing 25 columns and build it with dbt run.
2. Add a new column to the model and run dbt run again.
3. Errors like the one below appear:
`xxx`.`xxxx` requires that the data to be inserted have the same number of columns as the target table: target table has 25 column(s) but the inserted data has 26 column(s), including 0 partition column(s) having constant value(s).
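For context, a minimal sketch of the kind of model that hits this; the model name, columns, and file_format are illustrative, not taken from the original report:

```sql
-- models/wide_table.sql (hypothetical name, for illustration only)
{{
    config(
        materialized='incremental',
        incremental_strategy='insert_overwrite',
        file_format='parquet'
    )
}}

select
    1 as col_1,
    2 as col_2
    -- ...and so on up to col_25 in the scenario above
```

The expectation is that adding a new column and then running dbt run --full-refresh rebuilds the table with the new schema; instead, the insert into the existing 25-column table fails with the error above.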
Who will this benefit?
When the schema changes and users want to refresh the table, --full-refresh will be helpful.
Are you interested in contributing this feature?
Yes.
My understanding is that in the original dbt, full refresh backs up and renames the old table, builds the new table for the model, and then drops the backed-up old table.
However, since dbt-glue (and dbt-spark) use external tables, even if we rename the table the data stays in the same location, so the backup doesn't make sense.
How about dropping the table only on full refresh?
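For illustration, a minimal sketch of what that could look like inside the incremental materialization, using dbt's built-in should_full_refresh() macro and the adapter.drop_relation() API; this is an assumed shape of the change, not the actual dbt-glue macro:

```sql
{%- set existing_relation = load_relation(this) -%}

{% if existing_relation is not none and should_full_refresh() %}
    {# On full refresh, skip the backup/rename step entirely and drop the existing #}
    {# external table so the rest of the materialization recreates it with the new schema #}
    {% do adapter.drop_relation(existing_relation) %}
    {% set existing_relation = none %}
{% endif %}
```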
Is there any update on this? I am having a hard time using Hudi with full refresh; this functionality is very important. It would also allow Hudi indexes to be used for full-load tables and increase performance.