
TraceReplayer Questions #26

Open
jmahmud47 opened this issue Jun 21, 2021 · 3 comments
jmahmud47 commented Jun 21, 2021 (Collaborator)

I have completed the task of creating the execution.json file for the trace-replayer and pushed the changes to GitHub. I will make a few more changes to clean up the code. There are some scenarios I would like to discuss with you. For example:

i. For the search activity (image attached), I cannot extract the GUI component: to reach this page and type something into the search text field, we click a search button, which is a different GUI component. When we type something into the search text field, we select one of the predictive-text suggestions, but we cannot extract the predictive text from uiautomator. Therefore, I did not take screenshots for the search option.

[screenshot: search activity]

ii. For the email or password fields, I took screenshots twice: (a) when clicking on the GUI component, and (b) after entering the text. Please let me know if I need to change that.

iii. For swipes, we take the screenshot at the point where the screen is initially touched. If the initially selected element is a GUI component, we get a screenshot of that component. However, in some cases we got a cropped image because the GUI component was not selected properly.

iv. For typing text, each character is treated as a click event with coordinates. I did not take screenshots for every character; I took screenshots only before and after writing the text into a GUI component.

I have attached one result from the trace-replayer here. Please let me know your opinion on the scenarios mentioned above.

traceReplayer.zip
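
(For reference, a minimal sketch of what a single step entry in execution.json might look like; the field names and values below are illustrative assumptions, not the trace-replayer's actual schema.)

```python
# Minimal sketch of one execution.json step entry (hypothetical schema:
# "action", "component", "text", and "screenshots" are illustrative names only).
import json

step = {
    "sequence": 3,
    "action": "CLICK",                                  # e.g., CLICK, TYPE, SWIPE
    "component": {
        "resource_id": "com.example.app:id/email",      # placeholder id
        "class": "android.widget.EditText",
        "bounds": [84, 612, 996, 708],                   # x1, y1, x2, y2
    },
    "text": None,                                        # filled only for TYPE-like actions
    "screenshots": {
        "before": "step_03_before.png",                  # taken when clicking the component
        "after": "step_03_after.png",                    # taken after the text is written
    },
}

print(json.dumps(step, indent=2))
```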

bee-tool bot added the question (Further information is requested) label on Jun 21, 2021
kpmoran commented Jun 24, 2021 (Collaborator)

Hi all,

I will echo what I said in Slack here for posterity:

Basically two main questions:

  1. How should we handle auto-complete text? (It doesn't show up in UI Automator.)
  2. What should the target component be for swipe events?

For 1, I would suggest just having a "type" event.

For 2, it is trickier, but I think the most consistent way to do it would be to take the top leaf-level component.

The only issue is that sometimes this is not the most intuitive; e.g., swiping on a list view might make more sense than swiping on a list item.
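
(As an illustration of the "top leaf-level component" idea, the sketch below walks a uiautomator XML dump and returns the first leaf node whose bounds contain the swipe's starting point; the dump file name, the attributes read, and the use of the start point are assumptions.)

```python
# Sketch: pick a leaf-level node from a uiautomator XML dump whose bounds
# contain the swipe's starting coordinates (illustrative only).
import re
import xml.etree.ElementTree as ET

BOUNDS_RE = re.compile(r"\[(\d+),(\d+)\]\[(\d+),(\d+)\]")

def parse_bounds(bounds_attr):
    """Convert a uiautomator bounds string '[x1,y1][x2,y2]' into four ints."""
    return tuple(map(int, BOUNDS_RE.match(bounds_attr).groups()))

def leaf_component_at(dump_path, x, y):
    """Return the first leaf node (in document order) whose bounds contain (x, y)."""
    root = ET.parse(dump_path).getroot()
    for node in root.iter("node"):
        if len(node) > 0:          # skip non-leaf nodes; iter() visits the children anyway
            continue
        x1, y1, x2, y2 = parse_bounds(node.get("bounds", "[0,0][0,0]"))
        if x1 <= x <= x2 and y1 <= y <= y2:
            return node.get("resource-id"), node.get("class"), (x1, y1, x2, y2)
    return None

# Example (hypothetical dump and coordinates):
# print(leaf_component_at("window_dump.xml", 540, 1200))
```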

kpmoran assigned ojcchar and kpmoran and unassigned kpmoran and ojcchar on Jun 24, 2021
ojcchar commented Jun 24, 2021 (Contributor)

@jmahmud47, regarding your comments above (based on our discussion on Slack):

i. We can have a "type" event for this case, on the text field, using the selected suggestion.
ii. It sounds OK to me.
iii. We don't need a component for swipes. Let's capture the direction and intensity (if possible) of the swipes.
iv. It sounds OK to me.

When these changes are done, can you please generate test data for Mileage, Gnucash, and Droidweight, so that we can incorporate this info into the graph and test the chatbot?
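
(On point iii above, a minimal sketch of deriving a swipe's direction and intensity from its start/end coordinates and duration; the four-way classification and the speed-based intensity definition are assumptions, not the project's actual scheme.)

```python
# Sketch: classify a swipe as UP/DOWN/LEFT/RIGHT and compute a speed-based
# "intensity" from its start/end points and duration (illustrative only).
import math

def swipe_direction_and_intensity(x1, y1, x2, y2, duration_ms):
    dx, dy = x2 - x1, y2 - y1
    if abs(dx) >= abs(dy):
        direction = "RIGHT" if dx > 0 else "LEFT"
    else:
        # screen y grows downward, so a negative dy means the finger moved up
        direction = "DOWN" if dy > 0 else "UP"
    distance = math.hypot(dx, dy)
    intensity = distance / max(duration_ms, 1)    # pixels per millisecond
    return direction, intensity

# Example: a fast upward swipe
# print(swipe_direction_and_intensity(540, 1500, 540, 500, 200))  # -> ('UP', 5.0)
```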

ojcchar closed this as completed Jun 24, 2021
ojcchar reopened this Jun 24, 2021
jmahmud47 commented Jun 30, 2021 (Collaborator, Author)

@kpmoran and @ojcchar: searching is now implemented as a "type" event, and swipes are implemented with their direction included.
