This repository has been archived by the owner on Nov 7, 2024. It is now read-only.

Initialization of orthogonal tensors with respect to a pivot #931

Open · wants to merge 6 commits into base: master

Conversation

pragyasrivastava0805

Created a new method in tensornetwork/linalg/initialization.py that initializes a random tensor with entries drawn from a normal distribution, performs a QR decomposition on it, and returns the tensor Q, so that when the tensor is contracted about a given pivot index the result is orthogonal.
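The idea can be sketched with plain numpy (the function name, signature, and reshape-about-pivot convention below are assumptions for illustration, not the PR's actual code):

```python
import numpy as np

def random_orthogonal_wrt_pivot(shape, pivot_axis=1, seed=None):
    # Hypothetical numpy-only sketch of the PR's approach: draw a Gaussian
    # tensor, flatten it to a matrix about pivot_axis, QR-decompose it, and
    # keep Q so the result has orthonormal columns w.r.t. the pivot split.
    rng = np.random.default_rng(seed)
    tensor = rng.standard_normal(shape)
    left_dims, right_dims = shape[:pivot_axis], shape[pivot_axis:]
    matrix = tensor.reshape(int(np.prod(left_dims)), int(np.prod(right_dims)))
    q, _ = np.linalg.qr(matrix)  # reduced QR: q has orthonormal columns
    return q.reshape(left_dims + (q.shape[1],))
```

Contracting the returned tensor with its conjugate over the indices on one side of the pivot then yields an identity, which is the orthogonality property the description refers to.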

@google-cla

google-cla bot commented Aug 25, 2021

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.



@pragyasrivastava0805
Author

@googlebot I have signed

@google-cla google-cla bot added cla: yes and removed cla: no labels Aug 26, 2021
@mganahl (Contributor) left a comment


Thanks for the PR! Sorry for the delay; I left some comments for you to fix.

```python
if ((np.dtype(dtype) is np.dtype(np.complex128)) or
    (np.dtype(dtype) is np.dtype(np.complex64))):
  q, r = decompositions.qr(
      np,
      np.random.randn(*shape).astype(dtype) +
      1j * np.random.randn(*shape).astype(dtype),
      pivot_axis, non_negative_diagonal)
```

There is an else clause missing; otherwise line 804 gets overwritten.
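A minimal standalone sketch of what the fixed branching could look like (illustrative names; the real code calls decompositions.qr on the result rather than returning it directly):

```python
import numpy as np

def random_entries(shape, dtype):
    # With an explicit else, the real-dtype draw only runs for real dtypes
    # instead of unconditionally clobbering the complex-dtype result.
    if np.dtype(dtype) in (np.dtype(np.complex64), np.dtype(np.complex128)):
        t = np.random.randn(*shape) + 1j * np.random.randn(*shape)
    else:
        t = np.random.randn(*shape)
    return t.astype(dtype)
```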

```python
for dtype in dtypes[backend]["rand"]:
  tnI = tensornetwork.initialize_orthogonal_tensor_wrt_pivot(
      shape,
      dtype=dtype, pivot_axis,
```

That line should throw a syntax error because you're passing a positional argument after named arguments.
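In Python, positional arguments may not follow keyword arguments, so everything after `shape` has to be passed by name. A stand-in function (illustrative signature, not the PR's) demonstrates the corrected call shape:

```python
def initialize_orthogonal_tensor_wrt_pivot(shape, dtype=None, pivot_axis=-1,
                                           seed=None, backend=None,
                                           non_negative_diagonal=False):
    # Stand-in that just echoes its arguments, to show the call only.
    return (shape, dtype, pivot_axis, seed, backend, non_negative_diagonal)

# Correct: every argument after the positional `shape` is keyword-passed.
result = initialize_orthogonal_tensor_wrt_pivot(
    (5, 10, 3, 2),
    dtype="float64",
    pivot_axis=1,
    seed=10,
    backend="numpy",
    non_negative_diagonal=False)
```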

```python
      shape,
      dtype=dtype, pivot_axis,
      seed=seed,
      backend=backend, non_negative_diagonal)
```

same here

```python
      dtype=dtype, pivot_axis,
      seed=seed,
      backend=backend, non_negative_diagonal)
  npI = backend_obj.initialize_orthogonal_tensor_wrt_pivot(
      shape, dtype=dtype, pivot_axis, seed=seed, non_negative_diagonal)
```

Remove the function from the backend.

```python
def test_initialize_orthogonal_tensor_wrt_pivot(backend):
  shape = (5, 10, 3, 2)
  pivot_axis = 1
  seed = int(time.time())
```

pls use deterministic seed initialization
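A short sketch of why: with a fixed seed the random draw is reproducible, so a failing run can be replayed bit-for-bit (the seed value 10 is arbitrary):

```python
import numpy as np

seed = 10  # fixed, instead of int(time.time())
a = np.random.default_rng(seed).standard_normal((5, 10, 3, 2))
b = np.random.default_rng(seed).standard_normal((5, 10, 3, 2))
# Identical seeds give identical tensors, so test failures are replayable.
```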

```diff
@@ -200,3 +201,7 @@ def random_uniform(shape: Sequence[int],
   the_tensor = initialize_tensor("random_uniform", shape, backend=backend,
                                  seed=seed, boundaries=boundaries, dtype=dtype)
   return the_tensor
+def initialize_orthogonal_tensor_wrt_pivot(shape=Sequence[int],dtype:Optional[Type[np.number]]=None,pivot_axis:int=-1,seed=Optional[int]=None,backend: Optional[Union[Text, AbstractBackend]] = None,non_negative_diagonal:bool=False) ->Tensor:
+  the_tensor=initialize_tensor("randn",shape,backend=backend,seed=seed,dtype=dtype)
+  q,r=linalg.qr(the_tensor,pivot_axis,non_negative_diagonal)
```

Use _ instead of r (unused variable).

```diff
@@ -200,3 +201,7 @@ def random_uniform(shape: Sequence[int],
   the_tensor = initialize_tensor("random_uniform", shape, backend=backend,
                                  seed=seed, boundaries=boundaries, dtype=dtype)
   return the_tensor
+def initialize_orthogonal_tensor_wrt_pivot(shape=Sequence[int],dtype:Optional[Type[np.number]]=None,pivot_axis:int=-1,seed=Optional[int]=None,backend: Optional[Union[Text, AbstractBackend]] = None,non_negative_diagonal:bool=False) ->Tensor:
```

I'm wondering if we could find a less clunky name. Some possibilities that come to mind are random_orthogonal or random_isometry. @alewis?


Pls add a docstring that explains what the function is doing, what the arguments are, and what the returned values are.
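A possible shape for such a docstring (the wording is a suggestion, matching the function as described in the PR; the body here is a stub):

```python
def initialize_orthogonal_tensor_wrt_pivot(shape, dtype=None, pivot_axis=-1,
                                           seed=None, backend=None,
                                           non_negative_diagonal=False):
  """Return a random tensor that is orthogonal with respect to `pivot_axis`.

  A tensor with normally distributed entries is drawn, reshaped into a
  matrix about `pivot_axis`, and QR-decomposed; the Q factor, reshaped
  back into a tensor, is returned.

  Args:
    shape: The shape of the tensor.
    dtype: The dtype of the tensor entries.
    pivot_axis: The axis about which the tensor is matricized for the QR
      decomposition.
    seed: An optional seed for the random number generator.
    backend: The backend, or the name of the backend, to use.
    non_negative_diagonal: If True, fix the gauge so that R has a
      non-negative diagonal.

  Returns:
    Tensor: The orthogonal tensor.
  """
  raise NotImplementedError("docstring sketch only")
```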

```diff
@@ -1044,3 +1044,6 @@ def eps(self, dtype: Type[np.number]) -> float:
```

Why did you add this function to the backend? I don't think we need it here

```diff
@@ -795,3 +795,13 @@ def eps(self, dtype: Type[np.number]) -> float:
       float: Machine epsilon.
     """
     return np.finfo(dtype).eps
+  def initialize_orthogonal_tensor_wrt_pivot(self,shape=Sequence[int],dtype:Optional[Type[np.number]]=None,pivot_axis:int=-1,seed=Optional[int]=None,backend: Optional[Union[Text, AbstractBackend]] = None,non_negative_diagonal: bool = False):->Tensor
```

I don't think we need this function

```python
      seed=seed,
      backend=backend, non_negative_diagonal)
  npI = backend_obj.initialize_orthogonal_tensor_wrt_pivot(
      shape, dtype=dtype, pivot_axis, seed=seed, non_negative_diagonal)
  np.testing.assert_allclose(tnI.array, npI)
```

pls replace with a test that checks if the initialized tensor has the desired properties
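One way such a property test could look (a numpy-only sketch; `check_isometry` is an illustrative helper, not part of the library):

```python
import numpy as np

def check_isometry(tensor, pivot_axis, atol=1e-6):
    # The property to test: matricized about the pivot, the tensor has
    # orthonormal columns (or rows, whichever side is smaller).
    left = int(np.prod(tensor.shape[:pivot_axis]))
    right = int(np.prod(tensor.shape[pivot_axis:]))
    m = tensor.reshape(left, right)
    gram = m.conj().T @ m if left >= right else m @ m.conj().T
    return np.allclose(gram, np.eye(min(left, right)), atol=atol)
```

Such a check is backend-independent and avoids comparing against a second random draw, which is what the current assert_allclose against npI effectively does.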
