Update gestures.md to reflect current state of gesture recognition algorithm #75

Merged 2 commits on Jun 23, 2023
62 changes: 23 additions & 39 deletions gestures.md
@@ -1,51 +1,35 @@
# Per-device
```mermaid
sequenceDiagram
participant cg as ClientGestureLibrary
participant ci as Interactor
participant cc as ClientController
participant sc as ServerController
participant mh as MessageHandler
participant sv as ServerView
participant svg as ServerViewGroup
participant item

cg ->> ci : recognize gesture
ci ->> cc : call handler received from controller
cc ->> sc : emit message with gesture data
sc ->> mh : handle the gesture
mh ->> sv : transform x,y from view coordinates to workspace coordinates
sv ->> mh : (x, y) point
mh ->> item : emit gesture event

note over sc, item: item is selected using the Track gesture,<br>first point down finds an item to lock, or the view
```
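
In the per-device flow above, the geometric step is the ServerView converting the gesture's (x, y) from view coordinates into workspace coordinates. A minimal sketch of what such a conversion can look like, assuming a view described by a position, zoom, and rotation in the workspace (the names and shapes here are illustrative, not the actual WAMS API):

```js
// Illustrative only: a view positioned at (view.x, view.y) in the workspace,
// zoomed by view.scale and rotated by view.rotation radians.
function viewToWorkspace(view, point) {
  const cos = Math.cos(view.rotation);
  const sin = Math.sin(view.rotation);
  // Undo the view's zoom, rotate into workspace orientation, then translate.
  const x = point.x / view.scale;
  const y = point.y / view.scale;
  return {
    x: view.x + x * cos - y * sin,
    y: view.y + x * sin + y * cos,
  };
}

// Example: a touch at view coordinates (100, 50) on a view located at
// (300, 200) in the workspace, zoomed to 2x, not rotated.
const view = { x: 300, y: 200, scale: 2, rotation: 0 };
console.log(viewToWorkspace(view, { x: 100, y: 50 })); // -> { x: 350, y: 225 }
```

The exact order and sign of the scale and rotation terms depends on how the library defines a view's transform; the point is only that a single, well-defined mapping takes view coordinates to workspace coordinates.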

# Multi-device:

The multi-device flow before this change:
```mermaid
sequenceDiagram
participant cc as ClientController
participant sc as ServerController
participant dv as Device
participant gc as GestureController
participant sg as ServerGestureLibrary
participant mh as MessageHandler
participant sv as ServerView
participant svg as ServerViewGroup
participant item

cc ->> sc : emit pointer event
sc ->> dv : transform x,y from view coordinates to device coordinates
dv ->> sc : (x, y) point
sc ->> gc : process pointer event
gc ->> sg : process pointer event
sg ->> mh : recognize gesture
mh ->> svg : transform x,y point from device coordinates to workspace coordinates
svg ->> mh : (x, y) point
mh ->> item : emit gesture event

note over sc, item: item is selected using the Track gesture,<br>first point down finds an item to lock, or the view
```

The note that previously followed this diagram:

This double transformation is actually correct. The device transformation moves the pointer event to where the device is located physically relative to other devices, that is, where in the server view group the event takes place. Further transforming from where in the view group the event takes place to where in the workspace it takes place is therefore logical. However, if we are to use this same process for single-device gestures, we need to provide a unique ServerViewGroup for each ServerView when in single-device mode.

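That reasoning is just composition of coordinate transforms. As a rough sketch (purely illustrative, not the actual WAMS API), assume both the device's placement within its view group and the view group's placement within the workspace are described by the same kind of frame (a position, scale, and rotation); mapping a point out of the device frame and then out of the group frame is applying two changes of reference frame in sequence:

```js
// Illustrative only. A "frame" places a child coordinate space inside its
// parent: translate by (x, y), zoom by scale, rotate by rotation radians.
// Same kind of mapping as the viewToWorkspace sketch above.
function toParent(frame, point) {
  const cos = Math.cos(frame.rotation);
  const sin = Math.sin(frame.rotation);
  const x = point.x / frame.scale;
  const y = point.y / frame.scale;
  return {
    x: frame.x + x * cos - y * sin,
    y: frame.y + x * sin + y * cos,
  };
}

// deviceFrame: where this physical device sits inside the server view group.
// groupFrame:  where the view group sits inside the workspace.
function deviceToWorkspace(deviceFrame, groupFrame, point) {
  return toParent(groupFrame, toParent(deviceFrame, point));
}

// Example: a device offset 500 units to the right within its group, and a
// group offset (100, 100) into the workspace, with no zoom or rotation.
const deviceFrame = { x: 500, y: 0, scale: 1, rotation: 0 };
const groupFrame = { x: 100, y: 100, scale: 1, rotation: 0 };
console.log(deviceToWorkspace(deviceFrame, groupFrame, { x: 20, y: 30 }));
// -> { x: 620, y: 130 }
```
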
The multi-device flow after this change:
```mermaid
sequenceDiagram
participant cc as ClientController
participant sc as ServerController
participant dv as Device
participant gc as GestureController
participant mh as MessageHandler
participant sv as ServerView
participant ws as WorkSpace
actor user

cc ->> sc : transmit pointer event with (clientX, clientY)
sc ->> sv : transform (clientX, clientY) to view coordinates
sv ->> sc : (viewX, viewY)

alt if event is pointerdown and the view has no locked item
sc ->> ws : obtain item lock, possibly on the view group
ws ->> sv : set locked item
end

sc ->> user : emit pointer event with (viewX, viewY)
sc ->> dv : transform (clientX, clientY) to device coordinates
dv ->> sc : (deviceX, deviceY)
sc ->> gc : process pointer event
gc ->> mh : recognize gesture with (deviceCentroidX, deviceCentroidY)
mh ->> dv : transform (deviceCentroidX, deviceCentroidY) back to client coordinates
dv ->> mh : (clientCentroidX, clientCentroidY)
mh ->> sv : transform (clientCentroidX, clientCentroidY) to workspace coordinates
sv ->> mh : (viewCentroidX, viewCentroidY)
mh ->> user : emit gesture event with (viewCentroidX, viewCentroidY)

alt if event is pointerup and there are no inputs remaining in the device group
sc ->> sv : release locked item
end
```
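
The two alt blocks in the updated diagram describe a small piece of lock bookkeeping: the first pointerdown obtains an item lock when the view holds none (falling back to the view itself), and the lock is released once the last input in the device group lifts. A minimal sketch of that bookkeeping, using hypothetical names rather than the actual WAMS implementation:

```js
// Illustrative only: tracks active pointers for one device group and the
// item currently locked for gestures on its view.
class GestureLockTracker {
  constructor() {
    this.activeInputs = new Set(); // pointer ids currently down in the group
    this.lockedItem = null;        // item (or view) that gestures act upon
  }

  // findItemAt is assumed to return the item under the point, or null.
  pointerdown(pointerId, point, findItemAt) {
    this.activeInputs.add(pointerId);
    if (this.lockedItem === null) {
      // First point down locks an item, or falls back to the view itself.
      this.lockedItem = findItemAt(point) || 'view';
    }
  }

  pointerup(pointerId) {
    this.activeInputs.delete(pointerId);
    if (this.activeInputs.size === 0) {
      this.lockedItem = null; // no inputs remain, release the locked item
    }
  }
}

// Example: two fingers down, then both lifted.
const tracker = new GestureLockTracker();
tracker.pointerdown(1, { x: 10, y: 10 }, () => 'some item');
tracker.pointerdown(2, { x: 40, y: 10 }, () => 'some item');
tracker.pointerup(1);
console.log(tracker.lockedItem); // 'some item' -- one input still active
tracker.pointerup(2);
console.log(tracker.lockedItem); // null -- lock released
```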