Move docs to docstrings and generate documentation with sphinx-autodoc #367

Merged 5 commits on Oct 15, 2024
54 changes: 2 additions & 52 deletions doc/source/api.rst
@@ -1,55 +1,5 @@
API Reference
=============

.. py:function:: frame_to_hyper(df: pd.DataFrame, database: Union[str, pathlib.Path], *, table: Union[str, tableauhyperapi.Name, tableauhyperapi.TableName], table_mode: str = "w", not_null_columns: Optional[Iterable[str]] = None, json_columns: Optional[Iterable[str]] = None, geo_columns: Optional[Iterable[str]] = None, process_params: Optional[Dict[str, str]] = None) -> None

Converts a DataFrame to a .hyper extract.

:param df: Data to be written out.
:param database: Name / location of the Hyper file to write to.
:param table: Table to write to.
:param table_mode: The mode to open the table with. Default is "w" for write, which truncates the file before writing. Another option is "a", which will append data to the file if it already contains information.
:param not_null_columns: Columns which should be considered "NOT NULL" in the target Hyper database. By default, all columns are considered nullable.
:param json_columns: Columns to be written as a JSON data type.
:param geo_columns: Columns to be written as a GEOGRAPHY data type.
:param process_params: Parameters to pass to the Hyper Process constructor.

.. py:function:: frame_from_hyper(source: Union[str, pathlib.Path, tab_api.Connection], *, table: Union[str, tableauhyperapi.Name, tableauhyperapi.TableName], return_type: Literal["pandas", "pyarrow", "polars"] = "pandas", process_params: Optional[Dict[str, str]] = None)

Extracts a DataFrame from a .hyper extract.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param table: Table to read.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.


.. py:function:: frames_to_hyper(dict_of_frames: Dict[Union[str, tableauhyperapi.Name, tableauhyperapi.TableName], pd.DataFrame], database: Union[str, pathlib.Path], *, table_mode: str = "w", not_null_columns: Optional[Iterable[str]] = None, json_columns: Optional[Iterable[str]] = None, geo_columns: Optional[Iterable[str]] = None, process_params: Optional[Dict[str, str]] = None) -> None

Writes multiple DataFrames to a .hyper extract.

:param dict_of_frames: A dictionary whose keys are valid table identifiers and values are DataFrames.
:param database: Name / location of the Hyper file to write to.
:param table_mode: The mode to open the table with. Default is "w" for write, which truncates the file before writing. Another option is "a", which will append data to the file if it already contains information.
:param not_null_columns: Columns which should be considered "NOT NULL" in the target Hyper database. By default, all columns are considered nullable.
:param json_columns: Columns to be written as a JSON data type.
:param geo_columns: Columns to be written as a GEOGRAPHY data type.
:param process_params: Parameters to pass to the Hyper Process constructor.

.. py:function:: frames_from_hyper(source: Union[str, pathlib.Path, tab_api.Connection], *, return_type: Literal["pandas", "pyarrow", "polars"] = "pandas", process_params: Optional[Dict[str, str]] = None) -> dict

Extracts tables from a .hyper extract.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.


.. py:function:: frame_from_hyper_query(source: Union[str, pathlib.Path, tab_api.Connection], query: str, *, return_type: Literal["pandas", "polars", "pyarrow"] = "pandas", process_params: Optional[Dict[str, str]] = None)

Executes a SQL query and returns the result as a DataFrame.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param query: SQL query to execute.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.
.. automodule:: pantab
:members:
14 changes: 14 additions & 0 deletions doc/source/conf.py
@@ -1,5 +1,10 @@
import pathlib
import sys
from typing import List

srcdir = pathlib.Path(__file__).resolve().parent.parent.parent / "src"
sys.path.insert(0, str(srcdir))

# -- Project information -----------------------------------------------------

project = "pantab"
@@ -13,6 +18,8 @@
extensions = [
"sphinx_rtd_theme",
"sphinxext.opengraph",
"sphinx.ext.autodoc",
"sphinx_autodoc_typehints",
]

templates_path = ["_templates"]
@@ -35,3 +42,10 @@
ogp_site_url = "https://pantab.readthedocs.io/"
ogp_use_first_image = False
ogp_image = "https://pantab.readthedocs.io/en/latest/_static/pantab_logo.png"

# -- Options for autodoc -----------------------------------------------------

autodoc_mock_imports = ["pantab.libpantab"]
autodoc_typehints = "none"
typehints_use_signature = True
typehints_use_signature_return = True
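
Worth noting: autodoc_mock_imports lets Sphinx import pantab on a machine with no compiled pantab.libpantab extension (for example, a docs-only build environment). A rough sketch of what that mocking amounts to, using unittest.mock; this is an illustration, not what Sphinx literally runs:

import sys
from unittest import mock

# Register a stand-in for the native extension before importing pantab,
# so the pure-Python modules (and their docstrings) import cleanly.
sys.modules["pantab.libpantab"] = mock.MagicMock()

import pantab  # succeeds even without a compiled build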
2 changes: 1 addition & 1 deletion environment.yml
@@ -15,9 +15,9 @@ dependencies:
- pyarrow
- python
- pytest
- pytest_xdist
- scikit-build-core
- sphinx
- sphinx-autodoc-typehints
- pre-commit
- sphinx_rtd_theme
- sphinxext-opengraph
26 changes: 23 additions & 3 deletions src/pantab/_reader.py
@@ -14,7 +14,14 @@ def frame_from_hyper_query(
return_type: Literal["pandas", "polars", "pyarrow"] = "pandas",
process_params: Optional[dict[str, str]] = None,
):
"""See api.rst for documentation."""
"""
Executes a SQL query and returns the result as a DataFrame.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param query: SQL query to execute.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.
"""
if process_params is None:
process_params = {}

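A hedged usage sketch (the file name, table, and query below are made up for illustration):

import pantab

# Run ad hoc SQL against a Hyper file; return_type picks the frame library.
df = pantab.frame_from_hyper_query(
    "example.hyper",
    'SELECT animal, COUNT(*) AS n FROM "animals" GROUP BY animal',
    return_type="polars",
)
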
@@ -44,7 +51,14 @@ def frame_from_hyper(
return_type: Literal["pandas", "polars", "pyarrow"] = "pandas",
process_params: Optional[dict[str, str]] = None,
):
"""See api.rst for documentation"""
"""
Extracts a DataFrame from a .hyper extract.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param table: Table to read.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.
"""
if isinstance(table, (pt_types.TableauName, pt_types.TableauTableName)):
tbl = str(table)
elif isinstance(table, tuple):
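
And reading a whole table rather than a query result (names again hypothetical):

import pantab

# Reads one table from the extract; "pandas" is the default return_type.
df = pantab.frame_from_hyper("example.hyper", table="animals")
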
@@ -65,7 +79,13 @@ def frames_from_hyper(
return_type: Literal["pandas", "polars", "pyarrow"] = "pandas",
process_params: Optional[dict[str, str]] = None,
):
"""See api.rst for documentation."""
"""
Extracts tables from a .hyper extract.

:param source: Name / location of the Hyper file to be read or Hyper-API connection.
:param return_type: The type of DataFrame to be returned.
:param process_params: Parameters to pass to the Hyper Process constructor.
"""
result = {}

table_names = libpantab.get_table_names(str(source))
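
A sketch of reading every table at once (hypothetical file name):

import pantab

# Returns a dict mapping each table name in the file to a frame of return_type.
frames = pantab.frames_from_hyper("example.hyper", return_type="pyarrow")
for name, tbl in frames.items():
    print(name, tbl.num_rows)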
25 changes: 23 additions & 2 deletions src/pantab/_writer.py
Expand Up @@ -57,7 +57,18 @@ def frame_to_hyper(
geo_columns: Optional[set[str]] = None,
process_params: Optional[dict[str, str]] = None,
) -> None:
"""See api.rst for documentation"""
"""
Converts a DataFrame to a .hyper extract.

:param df: Data to be written out.
:param database: Name / location of the Hyper file to write to.
:param table: Table to write to.
:param table_mode: The mode to open the table with. Default is "w" for write, which truncates the file before writing. Another option is "a", which will append data to the file if it already contains information.
:param not_null_columns: Columns which should be considered "NOT NULL" in the target Hyper database. By default, all columns are considered nullable.
:param json_columns: Columns to be written as a JSON data type.
:param geo_columns: Columns to be written as a GEOGRAPHY data type.
:param process_params: Parameters to pass to the Hyper Process constructor.
"""
frames_to_hyper(
{table: df},
database,
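
A minimal write sketch, with illustrative data and file names:

import pandas as pd
import pantab

df = pd.DataFrame({"animal": ["elephant", "tiger"], "weight": [13000, 220]})

# table_mode="w" (the default) truncates the file before writing.
pantab.frame_to_hyper(df, "example.hyper", table="animals")
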
@@ -79,7 +90,17 @@ def frames_to_hyper(
geo_columns: Optional[set[str]] = None,
process_params: Optional[dict[str, str]] = None,
) -> None:
"""See api.rst for documentation."""
"""
Writes multiple DataFrames to a .hyper extract.

:param dict_of_frames: A dictionary whose keys are valid table identifiers and values are DataFrames.
:param database: Name / location of the Hyper file to write to.
:param table_mode: The mode to open the table with. Default is "w" for write, which truncates the file before writing. Another option is "a", which will append data to the file if it already contains information.
:param not_null_columns: Columns which should be considered "NOT NULL" in the target Hyper database. By default, all columns are considered nullable.
:param json_columns: Columns to be written as a JSON data type.
:param geo_columns: Columns to be written as a GEOGRAPHY data type.
:param process_params: Parameters to pass to the Hyper Process constructor.
"""
_validate_table_mode(table_mode)

if not_null_columns is None:
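
And the multi-table counterpart, again as an illustrative sketch:

import pandas as pd
import pantab

frames = {
    "animals": pd.DataFrame({"animal": ["elephant"]}),
    "planets": pd.DataFrame({"planet": ["mars"]}),
}

# table_mode="a" would append to existing tables instead of truncating.
pantab.frames_to_hyper(frames, "example.hyper", table_mode="w")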