Fix Python script error introduced in D47385306
Summary: - Update Python template to resolve the runtime error introduced in D47385306

Reviewed By: sryap

Differential Revision: D47484716

fbshipit-source-id: ca7be407b7db3b3e0cebc0556928f54498f0d271
q10 authored and facebook-github-bot committed Jul 15, 2023
1 parent 4a3931a commit 55cd84c
Showing 1 changed file with 8 additions and 16 deletions.
24 changes: 8 additions & 16 deletions fbgemm_gpu/codegen/split_embedding_codegen_lookup_invoker.template
@@ -34,11 +34,6 @@ torch.ops.load_library("//deeplearning/fbgemm/fbgemm_gpu:embedding_inplace_updat
 {%- endif %}
 
 
-{%- if is_experimental_optimizer %}
-_{{ optimizer }}_first_invocation = True
-{%- endif %}
-
-
 def invoke(
     common_args: CommonArgs,
     optimizer_args: OptimizerArgs,
@@ -65,17 +60,14 @@ def invoke(
     {%- endif %}
 ) -> torch.Tensor:
     {%- if is_experimental_optimizer %}
-    global _{{ optimizer }}_first_invocation
-    if _{{ optimizer }}_first_invocation:
-        warnings.warn(
-            f"""\033[93m
-            [FBGEMM_GPU] NOTE: The training optimizer '{{ optimizer }}' is marked as
-            EXPERIMENTAL and thus not optimized, in order to reduce code compilation
-            times and build sizes!
-            \033[0m""",
-            RuntimeWarning,
-        )
-        _{{ optimizer }}_first_invocation = False
+    # By design, the warning only shows up once
+    warnings.warn(
+        f"""\033[93m
+        [FBGEMM_GPU] NOTE: The training optimizer '{{ optimizer }}' is marked as
+        EXPERIMENTAL and thus not optimized, in order to reduce code compilation
+        times and build sizes!
+        \033[0m"""
+    )
     {%- endif %}
 
     {%- if has_cpu_support %}
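The replacement drops the module-level first-invocation flag and instead relies on Python's default warning filter, which reports a warning raised from a given call site only once per process (hence the new comment "By design, the warning only shows up once"). Below is a minimal standalone sketch of that behavior; it is not part of the FBGEMM template, and the function name and message are illustrative only.

import warnings

def invoke_experimental() -> None:
    # Under Python's default warning filter, a warning with the same message,
    # category, and source line is printed only the first time it is raised;
    # subsequent calls from the same location are deduplicated automatically.
    warnings.warn(
        "[EXAMPLE] experimental optimizer in use; not optimized",
        UserWarning,
    )

if __name__ == "__main__":
    for _ in range(3):
        invoke_experimental()
    # Only one warning appears on stderr, so no explicit
    # "first invocation" global flag is required.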
