Solidify the Dataset class and adapt it to work with multi-band rasters (#…45)
* rename examples notebook
* deprecate the path parameter of the to_polygon method
* rename to_polygon to cluster2; the column name in the returned gdf has the band name
* arrange data
* rename to_geo_dataframe to to_feature_collection
* add missing data
* arrange test data
* update
* change the ERA5 no-data value to -9999
* add test case for the error in the to_dataset bug
* add layer_count property to the FeatureCollection class
* add layer_names property to the FeatureCollection class
* change the return value of the _gdf_to_ds method to return a FeatureCollection, not a DataSource
* add test_gdf_to_ds_if_feature_is_already_ds test
* add columns property to the FeatureCollection class
* read folder using regex instead of renaming files
* Python 3.10
* fix file path
* add docs for Datacube.read_separate_files
* add repr method and files property; update docs
* rename the read_separate_files method to read_multiple_files
* rename the data attribute in the Datacube class to values
* rename the TestOpenDataCube test
* add read_multiple_files documentation
* update docs
* add datacube rst file to the index
* correct the datacube name in the index
* add missing docs images
* replace the convert_longitude method content to use gdal.Warp
* add comments in the convert_longitude method
* update convert_longitude example files
* refactor
* change bounds to total_bounds
* get the extent of a DataSource
* change the _gdf_to_ds implementation
* add _crop_with_polygon_warp method
* change the inputs of read_multiple_files to take a regex string
* add docs to checklist
* test reading files based on a number in the file name
* update
* refactor the read_multiple_files method
* the start and end parameters in read_multiple_files can take integer values, not only dates
* allow the values setter to set values even if the files were not read yet
* the path parameter of the to_file method can be a list of paths
* deprecate update_Datacube and replace it with a setter for the values property
* refactor
* remove repeated readthedocs badge
* update the detailed scheme of the datacube docs
* return the Array object from the plot method
* add datacube example
* add file_name to get the file name of an ogr.DataSource and gdal.Dataset; change the ogr.DataSource to a gdal.Dataset in _gdf_to_ds
* add a gdal_dataset parameter to the _gdf_to_ds method
* add ogr_ds_togdal_dataset method
* refactor
* change _ds_to_gdf to use VectorTranslate
* correct the name of the MEMORY_FILE
* print the gdal_ds for debugging
* add gdal.UseExceptions() in the featurecollection module
* extract the layer name from the gdal_ds object
* remove the driver type in the gpd.read_file call in _ds_to_gdf
* add the layer name to gpd.read_file when reading from memory
* remove the event branch so the workflow runs on all branches
* correct name
* add PyPI to the workflow name
* add Ubuntu conda workflow
* install the dev packages using pip
* add conda optional packages
* remove installing cleopatra in the workflow file as it is in the requirements.txt/environment.yml file
* replace conda with the marketplace conda
* activate the test env instead of the base env
* remove mamba
* trigger CI on push only
* import cleopatra inside the test as it raises an ImportError when testing the main package with cleopatra not installed
* include Python 3.11 in the yml file
* remove the upper limit of the Python version
* remove the test for Python 3.11 as dependencies still have <3.11 in their yml files
* remove testing conda for Python 3.9
* remove `python setup.py install` as it reinstalls packages with pip in the conda test
* add shell specification line
* clean files
* gather the conda and PyPI workflows
* remove redundant workflows
* add two cases to ds_to_gdf: in memory and I/O
* refactor _ds_to_gdf
* to_dataset uses _gdf_to_ds to convert the gdf to a gdal dataset instead of writing the vector to disk
* add pivot point attribute to the FeatureCollection class
* change raster.RasterCount in the get_band_names method to use the band_count method
* add set_band_names method
* add band_name setter
* column_name can take one column, multiple, or all; multi-band rasters are enabled
* remove writing the mask vector to disk as it is not used anymore
* comment and refactor the to_dataset method
* add missing test data
* update pre-commit hooks
* migrate to Shapely 2.x
* update dependencies
* update GDAL to 3.7.0
* rename _create_driver_from_scratch to create_driver_from_scratch
* add function to convert an OGR data type code into a numpy data type
* update the Python version for the conda build in actions to 3.11
* correct the CI to run plotting tests in the optional-package scenario
* _read_variable does not raise an error if the variable does not exist but returns None
* add dtypes property to the FeatureCollection object
* convert ogr.OFTInteger to int64 instead of int32 to unify it with how geopandas reads it
* add Qodana CI yaml file
* exclude paths from Qodana
* add conversion table between numpy, OGR, and GDAL
* convert numpy data types to gdal data types using the conversion table instead of the gdal_array.NumericTypeCodeToGDALTypeCode functions
* hard-code ogr.OFTInteger (0) to np.int32, ogr.OFTInteger64 (12) to np.int64, and ogr.OFTReal (2) to np.float64
* add unit test for the gdal_to_numpy_dtype function
* use the DTYPE_CONVERSION_DF in the gdal_to_numpy_dtype function instead of the NUMPY_GDAL_DATA_TYPES dictionary
* raise an error if the gdal data type code does not exist in the DTYPE_CONVERSION_DF
* move calculating the dtype from the __init__ method to the dtype property
* add numpy_dtype property
* rename dtype to gdal_dtype
* create dtype and numpy_dtype properties
* change the scope of some fixtures to function
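The dtype-conversion items above can be sketched as a minimal mapping. This is a hypothetical stand-in: the names `OGR_TO_NUMPY` and `ogr_to_numpy_dtype` are illustrative only (the library keeps the mapping in a `DTYPE_CONVERSION_DF` table), but the integer keys are the stable OGR field-type constants (`ogr.OFTInteger == 0`, `ogr.OFTReal == 2`, `ogr.OFTInteger64 == 12`).

```python
import numpy as np

# Sketch of the hard-coded OGR -> numpy mapping described above
# (hypothetical names; the real library uses a conversion table).
OGR_TO_NUMPY = {
    0: np.int32,    # ogr.OFTInteger
    2: np.float64,  # ogr.OFTReal
    12: np.int64,   # ogr.OFTInteger64
}


def ogr_to_numpy_dtype(code: int) -> type:
    """Return the numpy scalar type for an OGR field-type code."""
    if code not in OGR_TO_NUMPY:
        # mirror the "raise an error if the code is not in the table" behaviour
        raise ValueError(f"OGR data type code {code} is not in the conversion table")
    return OGR_TO_NUMPY[code]
```

An unknown code raises rather than silently falling back, matching the changelog item about raising for codes missing from the conversion table.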
* adjust the no_data_value to use the new gdal_dtype, dtype, and numpy_dtype
* use the DTYPE_CONVERSION_DF in the gdal_to_ogr_dtype function
* numpy_to_gdal_dtype can take a numpy dtype instead of an array
* add exceptions in _check_no_data_value to capture when the no_data_value cannot be set
* use OverflowError generally to replace the no_data_value with the default one
* refactor tests
* clean
* read the array with the same dtype as the raster band; in _crop_aligned use the DEFAULT_NO_DATA_VALUE if the no_data_value gives an error
* refactor
* deal with the problem of a None no_data_value with an unsigned integer dtype in the _check_no_data_value function
* correct the no_data_value in the raster
* change to_featurecollection to use the crop method
* refactor
* add Numbers as a data type
* add an exception handler in the to_file method to work around saving a raster with a color table
* remove the test result file if it exists
* rename CI files
* pin the GDAL version to solve the gdal.Warp error
* fix the GDAL version in CI to 3.7.1
* add missing test data
* add color_table property
* add color_table documentation
* move color_table tests to test_plot as they use the cleopatra package
* add the plot decorator to the color_table tests
* refactor
* update installation docs
* ignore old files
* refactor
* change the return value of _window and convert _window and get_tile from static to instance methods
* refactor
* test _window
* refactor
* reorder imports and rename locate_points to map_to_array_coordinates
* rename tests for the renamed map_to_array_coordinates method
* test array_to_map_coordinates
* refactor the featurecollection module
* extend create_point to create a feature_collection if an epsg is given
* refactor
* clean
* update setup.py
* update the history file
* update version number
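The no_data_value guard described in the items above can be sketched roughly as follows. The names `check_no_data_value` and `DEFAULT_NO_DATA_VALUE` are hypothetical stand-ins for the library's internals; the point is the fallback when a sentinel such as -9999 cannot be represented in an unsigned integer band.

```python
import numpy as np

DEFAULT_NO_DATA_VALUE = -9999  # the sentinel used for the ERA5 data above


def check_no_data_value(value, dtype):
    """Return a no-data value representable in ``dtype`` (a numpy scalar type).

    Sketch of the guard described above: a value that does not fit the
    band's dtype (e.g. -9999 for an unsigned integer band) is replaced by
    a safe in-range sentinel instead of overflowing.
    """
    if value is None:
        value = DEFAULT_NO_DATA_VALUE
    if np.issubdtype(dtype, np.integer):
        info = np.iinfo(dtype)
        if not (info.min <= value <= info.max):
            # out of range: fall back to the dtype's maximum as the sentinel
            return dtype(info.max)
    return dtype(value)
```

For a uint8 band this maps a None (or -9999) request to 255, while signed integer and float bands keep -9999 unchanged.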