vim: linebreak

16-07-19 16:58

specification:
  - scala.
  - library/middleware to connect input devices to output devices.
  - ar library in java or scala.
  - input devices: myo or joystick.
  - output devices: ar drone.
  - mechanism to map input events to output commands. possibly pattern match.
  - communications should use reactive programming. ReactiveX. XML or JSON.

additional requirements:
  - report
    - comparing reactive approach (as taken) and alternatives
    - describing problems encountered and solved, especially when implementing middleware between myo and ar.

17:08

task: choose ar control library that is either written in java or scala.
  - criteria:
    - ar version supported.
    - ar communications implementation.
  - enumerate and describe choices:
    - ODC (opendronecontrol). AR v1 or v2.
    - JavaDrone. v1. v2 untested.
    - YADrone. v2.

17:20 pause (read up on reactivex on own time). 0:22 elapsed.

http://reactivex.io/intro.html
  - Is not FRP: "One main point of difference is that functional reactive programming operates on values that change continuously over time, while ReactiveX operates on discrete values that are emitted over time."
  - Observable<data> + Observer
  - implements a push reactive pattern.

reactive-streams https://github.com/reactive-streams/reactive-streams-jvm/tree/v1.0.0
  - a specification, not a full implementation.
  - useful article: https://medium.com/@kvnwbbr/a-journey-into-reactive-streams-5ee2a9cd7e29#.eiubhcprh
  - how does it deal with publishers whose event production cannot be controlled (like myo armband)?
    - it uses what is called back pressure (a more fitting name would be back-pressure awareness), which essentially means both subscriber and publisher participate in flow control.
      - ideally, when the publisher or subscriber becomes faster than the other, the stream changes modes between push and pull. an ideal stream system would not force you to choose one or the other when you cannot be sure which component will be faster at all times, but would instead adapt in real time.
      - in Reactive Streams this is achieved through Subscription.request(n). if n exceeds what the publisher can produce, the system behaves push based; otherwise it is pull based. the actual logic that decides n is up to users of the Reactive Streams api.
  - relevant implementations:
    - akka stream
      - http://doc.akka.io/docs/akka/2.4.8/scala/stream/index.html
      - helpful article: https://medium.com/@kvnwbbr/diving-into-akka-streams-2770b3aeabb0#.uiucxmw18
      - judging by the documentation, it seems to have a big community.
      - "Working with rate" section in the documentation's cookbook
        - http://doc.akka.io/docs/akka/2.4.8/scala/stream/stream-cookbook.html#Working_with_rate
    - reactor core
      - https://github.com/reactor/reactor-core
      - actively developed as well.
      - unclear whether it is as powerful as akka reactive streams.
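
the request(n)-driven push/pull switch described above can be sketched without any dependency. the traits below mirror the shape of org.reactivestreams but are defined inline; RangePublisher and the run at the bottom are illustrative only, not spec code.

```scala
// Minimal sketch of the Reactive Streams demand protocol.
trait Subscription {
  def request(n: Long): Unit // signal demand for up to n more elements
  def cancel(): Unit
}
trait Subscriber[T] {
  def onSubscribe(s: Subscription): Unit
  def onNext(t: T): Unit
  def onComplete(): Unit
}

// Emits 0 until `until`, but only while the subscriber has outstanding demand.
final class RangePublisher(until: Int) {
  def subscribe(sub: Subscriber[Int]): Unit = {
    var next = 0
    var demand = 0L
    var done = false
    sub.onSubscribe(new Subscription {
      def request(n: Long): Unit = {
        demand += n
        while (demand > 0 && next < until) { // drain while demand lasts
          demand -= 1
          val v = next
          next += 1
          sub.onNext(v)
        }
        if (next == until && !done) { done = true; sub.onComplete() }
      }
      def cancel(): Unit = { demand = 0; next = until; done = true }
    })
  }
}

// Pull-based use: request one element per onNext. Requesting Long.MaxValue
// up front would instead make the same stream effectively push-based.
val received = scala.collection.mutable.ListBuffer[Int]()
new RangePublisher(3).subscribe(new Subscriber[Int] {
  private var s: Subscription = null
  def onSubscribe(sub: Subscription): Unit = { s = sub; s.request(1) }
  def onNext(t: Int): Unit = { received += t; s.request(1) }
  def onComplete(): Unit = ()
})
println(received.toList) // List(0, 1, 2)
```

the decision of how much to request per cycle is exactly the "actual logic that decides n" left to the api user.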

20:39 resume project.

choose ar drone library
  - candidates
    - yadrone seems more popular, and its development possibly stopped later than ODC's.
      - https://github.com/MahatmaX/YADrone
      - tutorial for familiarization:
        - https://vsis-www.informatik.uni-hamburg.de/oldServer/teaching//projects/yadrone/tutorial/tutorial.html
    - ODC is in scala, but it seems to have been scarcely used, judging by only 1 post in the issues section.
  - TODO should inspect ease of extensibility of both.
  - usage of the ar drone libraries seems simpler than that of the reactive libraries, for example, so the ar library should be easier to switch out later.

choose reactive library for io.
  - candidates
    - akka stream
      - seems to have a big community
      - featureful
    - reactor core
      - very actively developed by a small circle. possibly simpler.
  all in all akka stream seems the easier choice.

21:04 pause. elapsed 0:25.

total session time 0:47.

committing.

16-07-20 19:01

problem: unclear how movement intention is supposed to be communicated to ar drone.
  - it moves as it receives a move command.
  - a single movement lasts longer than a command processing cycle.
    - the purpose of this is to keep it from jittering when a command is being continued (the same command is sent multiple times).
  - continuation of movement and prompt control are achieved by subsequent occurrences of the same command resetting the AR's movement duration. this means that movement commands don't stack.
  - TODO how should multiple different simultaneous commands stack?
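
one possible answer to the stacking TODO, sketched in plain Scala: represent each command's movement contribution per axis in [-1, 1] and merge simultaneous commands by summing and clamping per axis. the Move/combine names are hypothetical, not from any drone library.

```scala
// Hypothetical per-axis command representation; values in [-1, 1].
final case class Move(roll: Double = 0, pitch: Double = 0, yaw: Double = 0, vertical: Double = 0)

def clamp(v: Double): Double = math.max(-1.0, math.min(1.0, v))

// Merge simultaneous commands by summing each axis and clamping the result.
def combine(cmds: Seq[Move]): Move =
  cmds.foldLeft(Move()) { (acc, m) =>
    Move(
      clamp(acc.roll + m.roll),
      clamp(acc.pitch + m.pitch),
      clamp(acc.yaw + m.yaw),
      clamp(acc.vertical + m.vertical))
  }

// forward + turn left combined into one command:
val merged = combine(Seq(Move(pitch = 0.5), Move(yaw = -0.3)))
println(merged)
```

clamping keeps two same-direction commands from exceeding full deflection; other merge rules (e.g. latest-wins per axis) would also fit this shape.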

task: evaluate ar libraries' movement apis.
  - according to doc, yadrone MoveCommand uses tilt angles for horizontal movement
    - https://vsis-www.informatik.uni-hamburg.de/oldServer/teaching//projects/yadrone/javadoc/de/yadrone/base/command/MoveCommand.html
  - odc uses percentage of maximum acceleration in specified direction, which is more idiomatic.
    - https://github.com/opendronecontrol/odc/blob/master/odc/src/main/scala/drone/DroneBase.scala
  - overall odc's scala interface seems more readable.

19:29

setup sbt project with odc and akka stream.
  - odc not on maven, cloning and building.
    - sbt "project backend-ardrone" publish-local
      - fails with xuggle#xuggle-xuggler;5.4: not found.
        - these artifacts are supposed to be in xuggle's own repository, which seems to be specified, but they are not found. 
        - xuggle is deprecated and no longer maintained, the maven repository is dead.
          - https://github.com/artclarke/xuggle-xuggler
        - could remove xuggle code, if video is unnecessary, or download xuggle's jar manually.
          - it can be found on YADrone's repo.
            - https://github.com/MahatmaX/YADrone/raw/master/YADrone/lib/xuggle-xuggler-5.4.jar
          - its dependencies can be seen here:
            - https://github.com/artclarke/xuggle-xuggler/blob/master/ivy.xml
          - downloading xuggler jar and adding deps is simpler than altering odc.

20:00 pause. 0:59 elapsed.

21:55 resume.

add xuggler to local ivy repo.
  - mkdir -p ~/.ivy2/local/xuggle/xuggle-xuggler/5.4/jars
  - wget https://github.com/MahatmaX/YADrone/raw/master/YADrone/lib/xuggle-xuggler-5.4.jar
  - the build fails the same way. ivy metadata files are probably missing.
falling back to adding jar as unmanaged dependency
  - wget xuggler.jar to /lib in odc project.
  - doesn't compile due to xuggler classes missing. it seems that xuggler's internal dependencies have to be pulled in too.
odc seems too much bother. falling back to yadrone.
  - setup a basic project with yadrone.

22:21 doing first commit.

note: yadrone examples and comments seem to suggest that for a command to be continuously executed the thread needs to be blocked. this misled me into thinking that a given command cannot be executed continuously without blocking the thread. in fact there is no reason why movement commands cannot be issued arbitrarily without sleeping the thread for some specific duration.

since myo is the publisher in the reactive stream, it would have been smarter to start with that.

22:44 task: add myo's java bindings to the build.
  - https://github.com/NicholasAStuart/myo-java
    - "com.github.nicholasastuart" % "myo-java" % "0.9.1"
    - copy over the examples namespace; it is not included in the jar. purpose: to have bozl test with myo.
  - interestingly myo doesn't seem to support linux.
    - https://developer.thalmic.com/downloads
    - https://developer.thalmic.com/forums/topic/18/

23:01 end of session. 1:06 elapsed.

total session time 2:05.

16-07-21 16:14

task: construct a mechanism that can produce intelligible myo events that could potentially be mapped to the ar's functions/movement.
  - how does myo communicate a distinct event (like a specific gesture)?
    - read myo's docs.
      - there is a number of preset gestures that it can detect. 
  - types of data provided by myo:
    - orientation
    - acceleration vector (3d)
    - gestural data
    - how it is worn. which hand, orientation on hand.
    - aux events (e.g. connect, disconnect)
  - each event provides a timestamp of when it occurred
  - myo can provide feedback to wearer (vibrate audibly)

16:48

gestures are few and very simple, and new ones cannot be defined. they can trigger things like take-off or landing, but they can hardly navigate the drone's flight.
  - myo's orientation can be directly translated to ar's orientation.
    - won't work. you want to steer it, not have it mimic your hand's orientation. so myo's orientation could be converted to a steering vector. the steering vector should equate to a portion of some angle (e.g. 45 degrees) relative to the position in which freeflight mode was initiated.
      - capture of initial position is important, because otherwise you would have to always be facing the same way.

17:02 pause. 0:48 elapsed.

18:45 resume.

quaternions are used to represent myo's rotation.
  - this game dev article might be useful:
    - http://www.gamasutra.com/view/feature/131686/rotating_objects_using_quaternions.php

MyoPilot white paper:
  - input can be:
    - stop all rotors (emergency), land, takeoff, hover, progress (move in a specified direction)
    - myopilot can combine several movement directions, for example forward and turning left.
  - myopilot pulls commands from inputs every 30ms and if none is received triggers hover.
  - when myo events are pulled, all except the latest are dropped, while one-time gestures (land/take-off) are communicated through flags. in streams this can be achieved by reducing events: dropping all but the latest movement data, and, if a one-time gesture is detected, dropping everything except it.
    - this presumes that the streams library provides a way to reduce publisher's queue.
  - change in command/s is communicated by vibrating the myo.
  - since movement is triggered only by gestures (fist and spread fingers), every time they are detected, a reference orientation is captured.
  - use gyroscopic data.
    - describes logic to find correct angle delta (shortest rotation) when it goes through circle's 0th degree.
      - e.g. given that the max angle is 100 and a1=10, a2=90: d1=a2-a1=80, d2=(a2-100)-a1=-20; min(|d1|,|d2|) selects d2=-20.
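
the shortest-rotation logic traced above can be written as a small function. the modular-arithmetic form below is my generalization of the worked example, not code from MyoPilot.

```scala
// Shortest signed rotation from a1 to a2 on a circle of circumference `max`.
// Matches the worked example: max = 100, a1 = 10, a2 = 90 gives -20.
def shortestDelta(a1: Double, a2: Double, max: Double): Double = {
  val half = max / 2
  val d = (a2 - a1 + half) % max        // Scala's % can return negatives
  val wrapped = ((d % max) + max) % max // normalize into [0, max)
  wrapped - half
}

println(shortestDelta(10, 90, 100)) // -20.0
println(shortestDelta(90, 10, 100)) // 20.0
```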

19:25 end of day. 0:40 elapsed.

total session time 1:28.

16-07-22 12:24

continued myopilot paper study.
  - input handling.
    - dead zones (tolerance) are applied to motions.
    - rotation is transformed to the range [-1,1]
      - sign(dmin)*((min(|dmin|,max) - deadzone) / (max - deadzone))
        - min(|dmin|,max) means that d can be larger than max.
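
the normalization formula above in plain Scala. clamping to 0 inside the dead zone is my assumption; the formula as written would go slightly negative there.

```scala
// Map an angle delta to [-1, 1], applying a dead zone (tolerance).
// sign(dmin) * ((min(|dmin|, max) - deadzone) / (max - deadzone))
def normalize(dmin: Double, max: Double, deadzone: Double): Double = {
  val magnitude = math.min(math.abs(dmin), max) // |dmin| may exceed max
  if (magnitude <= deadzone) 0.0                // assumed clamp in dead zone
  else math.signum(dmin) * (magnitude - deadzone) / (max - deadzone)
}

println(normalize(0.5, 1.0, 0.1))  // ~0.444
println(normalize(-2.0, 1.0, 0.1)) // -1.0, capped at max
println(normalize(0.05, 1.0, 0.1)) // 0.0, inside dead zone
```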

12:48 pause. 0:24 elapsed.

13:40 task: write preliminary myo DeviceListener.
  - look at example DeviceListener from myo-java and MyoInput from MyoPilot.
  - listen on pose and orientation data.
  - callbacks put event data on stream.

14:15 pause. 0:35 elapsed.

total session time 0:59.

off-clock research.

write stream, with publisher queue reduction.
  - buffering (queue reduction)
    - `.buffer(n, strategy)` adds buffering.
    - http://doc.akka.io/docs/akka/2.4/scala/stream/stream-rate.html
    - examples from cookbook
      - http://doc.akka.io/docs/akka/2.4/scala/stream/stream-cookbook.html#Triggering_the_flow_of_elements_programmatically
    - the #Dropping_elements example from the cookbook demos `.conflate((lastEvent, newEvent) => newEvent)`, which should allow reducing elements with some custom logic.
    - reduction could average out rotation events, instead of dropping non-latest.
      - doesn't sound useful, this is signal smoothing, no reason to apply.
    - the Source companion object also has `.queue` as a shorthand. seemingly the same as calling `.buffer` on an instance, as seen above.
      - not exactly the same: .queue makes the flow materialize into a SourceQueue, which can accept elements after materialization.
  - how to feed events to stream source?
    - look at Source methods. or preferably cookbook in docs.
      - http://doc.akka.io/docs/akka/2.4/scala/stream/stream-cookbook.html
        - no answer.
      - Source methods don't seem to have anything.
      - trait SourceQueue (or SourceQueueWithComplete) can be extended.
        - http://doc.akka.io/api/akka/2.4/index.html#akka.stream.javadsl.SourceQueue
          - actually no, it shouldn't be extended. instead Source.queue should be used and the graph would materialize to a SourceQueue which has `.offer`.
  - how to use events that finish flowing through a graph (stream)?
    - the using procedure can be formed as a Sink, to which Flow connects.
    - `Sink.foreach[String](println(_))`
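
the queue-reduction idea from the MyoPilot notes (keep only the latest movement sample, never drop a one-time gesture) can be sketched akka-free; the fold below has the same (buffered, incoming) shape as a `.conflate` step. the event types are hypothetical placeholders.

```scala
// Hypothetical event types for the sketch.
sealed trait MyoEvent
final case class Rotation(x: Double, y: Double, z: Double) extends MyoEvent
final case class Gesture(name: String) extends MyoEvent // e.g. take-off/land

// Same shape as akka-stream's conflate: (bufferedEvent, newEvent) => kept.
def conflateStep(buffered: MyoEvent, incoming: MyoEvent): MyoEvent =
  (buffered, incoming) match {
    case (g: Gesture, _: Rotation) => g        // a one-time gesture survives
    case _                         => incoming // otherwise keep the latest
  }

val backlog: List[MyoEvent] =
  List(Rotation(0, 0, 1), Gesture("takeoff"), Rotation(0, 0, 2))
val kept = backlog.reduce(conflateStep)
println(kept) // the takeoff gesture, not the newer rotation
```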

TODO can YADrone run commands without a loop?
  - in that case the flow could be appropriately throttled and sink could simply call for the appropriate command's execution.

16-07-25

new workweek. time from days 21-22 was not added to the tracker; adding it as today.
  - 07-21 1:28
  - 07-22 0:59
  - total 2:27

15:28 jot down stream.
  - unsure of structure yet, sketch in etc.scala
  - possible flow graph:
      events - fan out to gestures and rotation
        gestures repeat last if necessary
          fan in rotation with gesture into command

16:30 pause. 1:02 elapsed.

23:10

to form a command, the current orientation and gesture are needed. these are distinct events, however, and they possibly arrive at different intervals. how to communicate the current orientation and gesture simultaneously?
  - myopilot does this by keeping variables like currentPose, currentOrientation.
  - if event B arrives at a junction/processor that needs A and B, the missing event can be replayed, or we can block waiting for A. thus this can be rephrased as: how to remember the last event in a (sub)stream?

how to communicate that no messages have been passed in some time? in such a case the drone should hover. myopilot uses a pull-only stream, so it can tell when no events are produced. i am using a dynamic push-pull stream; therefore, this won't work. it is intuitive to look for a way for upstream to signal a lack of messages.
  - the stream sends commands to drone, the drone could time gaps between messages and trigger hover when needed. this would move the logic out of the stream.
  - a hypothetical processor could do the same upstream, and send a NoMessage message when a timeout is reached.
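
either variant boils down to timing the gap since the last command. a minimal plain-Scala sketch (illustrative names, no akka, no yadrone):

```scala
// Track the last command's timestamp; report hover once a timeout passes.
final class IdleGuard(timeoutMs: Long) {
  @volatile private var lastSeen = System.currentTimeMillis()
  def commandSeen(): Unit = lastSeen = System.currentTimeMillis()
  // `now` is a parameter so the check is easy to test deterministically.
  def shouldHover(now: Long = System.currentTimeMillis()): Boolean =
    now - lastSeen > timeoutMs
}

val guard = new IdleGuard(timeoutMs = 30)
guard.commandSeen()
println(guard.shouldHover(System.currentTimeMillis() + 100)) // true: gap exceeded
```

placed on the drone side this keeps the stream free of timing logic; placed upstream it would instead emit the NoMessage marker.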

23:46 pause. 0:36 elapsed.

end of day. total session time 1:38.

16-07-26 15:15

how to act on past events?
  http://doc.akka.io/docs/akka/2.4.8/scala/stream/stream-cookbook.html#Create_a_stream_processor_that_repeats_the_last_element_seen
    - describes how to have a processor repeat last element if there is demand.

ZipWith fan-in junctions let you use an arbitrary function to merge one element from each of multiple streams into a single element in one stream. after Broadcasting the events into their substreams and appending a replay processor, they could then be ZipWith'ed into the drone's WantedStates. a WantedState is a declarative version of a command.

15:35 pause. 0:20 elapsed.

19:41

this demonstrates how to create a graph with multiple sources. even though it uses actorref, this might work with SourceQueue.
  - http://stackoverflow.com/a/30151961/1714997
    - seems outdated. `FlowGraph` is not in the api.

FlowOps.expand can act as a replay stage
  - http://doc.akka.io/api/akka/2.4/index.html#akka.stream.scaladsl.FlowOps@expand[U](extrapolate:Out=%3EIterator[U]):FlowOps.this.Repr[U]

unsure how to extract SourceQueue from a RunnableGraph created with GraphDSL.
  - apparently the sources had to be passed to the graph factory (GraphDSL.create) in order to preserve their materialized values (SourceQueues). `builder.add` does not preserve materialized values.

sketched a more comprehensive stream graph.

TODO even though Sources are defined without buffers, or using 1 as buffer length, more than 1 element seems to be buffered. with throttle this creates a significant delay before the new input is propagated.

21:23 pause. 1:42 elapsed.

end of day. total elapsed 2:02

docs:
  akka stream (and not only doc spec)
    - https://github.com/akka/akka/tree/master/akka-docs/rst/scala/code/docs/stream
  stages overview
    - http://doc.akka.io/docs/akka/2.4/scala/stream/stages-overview.html
  docs
    - http://doc.akka.io/docs/akka/2.4/scala.html
  api
    - http://doc.akka.io/api/akka/2.4/#package

16-07-28 12:25

ThrottleMode.Shaping tells Throttle to throttle by pausing; the other mode, Enforcing, fails the stream instead of throttling.

if I were to prepend to Throttle a 1-length Buffer set to dropHead, would that keep the replaying Stream from having a lag of 3 elements?
  - this reduced lag to 1, which is expected, given description of ThrottleMode.Shaping.
  - is `.buffer(1,OverflowStrategy.dropHead)` same as `.detach`?
    - no. detach seems not to drop the head; it pauses instead.

because Throttle pauses at intervals instead of pulling at intervals, it doesn't really make sense when realtime data is needed. Throttle means that all changes have a delay equal to the interval.

ideally throttle would override its 1-element-buffer with new arrivals.
  - can this be achieved by setting bucket-size to 0?
    - no effect noticed.

adding FlowOps.log to log elements as they are arriving at headDropper stage.
    - this showed that the delayed (older) value is in fact being sent from upstream, and is not backpressure-buffered in post-junction stages, contrary to what I previously thought. this is supported by examination of Throttle's source code, which indicates that each new arrival replaces the buffered element.
    - place logger between a replay stage and junction.
      - the lag is still present.
    - place the logger between source and replay.
      - the logger does not delay; therefore, it is the replay that is lagging.

`FlowOps.keepAlive` injects elements into the stream if none arrive from upstream in a given amount of time. can be used to detect idleness (and to request hover).

13:19 pause. 0:54 elapsed.

20:59 resume.

  - what happens if a head dropping 1-element buffer is inserted between replay and junction?
    - no change.
  - trying to use a replay processor from Stream Cookbook.
    - no change.
  - a phenomenon that I can't explain is that the replay element emits old values until throttle releases a burst. it sounds like backpressure, but if replay is being influenced by backpressure, why is it replaying elements?
    - .expand stage is documented to emit when backpressure stops.

at the moment, a 1 element lag is tolerable. if throttling to update every ~10-30ms, it might not be noticeable.
  - a hack would be to handle throttled updates outside the stream, i.e. use the stream only to get the latest wanted state. not such a big hack when you think about it.

without headDropper between junction and throttle, the lag is 2 elements.
  - why?
  - might not waste time on this. there are more important things to do.

22:25 end. 1:26 elapsed.

end of day. 2:20 total.

16-07-29 16:28

task: connect myo to the stream's input.
  - involves
    - instancing MyoListener with the SourceQueues as returned by graph.run().
    - in MyoListener's callbacks pushing (offering) to stream.
    - in stream
      - zipping gesture and orientation data into WantedState.
      - throttling and side-effecting
        - or persisting latest WantedState.

task: instance MyoListener with graph.run output and have listener push to stream.
  - 17:01 done

task: zip gesture and orientation data to WantedState.
  - an alternative would be to zip into a single event that captures hand pose and myo orientation, and interpret it into WantedState later.
    - no need to separate, because neither is reusable.
  - TODO refactor namespaces
    - at the moment most of the stream (save for throttle and sink) is myo specific; it should be in its own namespace.
      - factoring it out would allow usage of multiple myos. not priority.
    - hiding Myo's low-level types behind myoar.Myo
  - WantedState is empty at the moment.

17:40 pause. 1:12 elapsed.

22:25

task: write WantedState. ie knowledge about how to act given some Myo input.
  - what abstraction would be appropriate
    - poses are discrete, so they might not need abstraction.
    - both myo and drone orientations are represented in same axes.
      - mapping would be easier if each axis could be well represented as an object.
        - axis would have a
          - constructor(
            - position:[0-1] scalar
            - deadzone:[0-1] scalar.
            - processor(position):processed )
          - processed:[0-1] scalar
        - then
          - OrientationEvent could be represented in axes
            - no change in processing required, compared to converting in WantedState.
          - mappings could be made like this XMyo.mapTo(YDrone)

task: convert quaternion to 3 axis.
  - what's the meaning of the fourth quaternion variable?
    - wiki description is obscure.
    - check myopilot code for how it's used.
      - NOTE myopilot uses floats for drone orientation axes, but doubles for myo's input.
      - it uses thirdparty software to deal with quaternions.
  - yadrone has its own ideas about how to process signals to the drone.
    - look at YADrone's CommandManager.move's variants. 
      - movement along some axis can be specified in [0;1] scalar.

23:36 pause. 1:11 elapsed.

end of day. 2:23 elapsed total.

16-07-30 15:58

NOTE in the code below, Buffer should be removed; then there would be only 2 buffers left, on Source (if > 0) and Throttle (if it in fact has one).
  - `Source.queue(0, DropHead) ~> Buffer(1, DropHead) ~> Throttle(1, 1s, 1, Shaping) ~> println`
  - I opened a related question on SO
    - http://stackoverflow.com/questions/38644294/how-to-emit-newest-element-every-n-seconds

task: convert quaternion to 3 axis.
  - check myo-java for quaternion utilities.
    - Quaternion and Vector3 classes have helpers.
  - MyoPilot gets quaternion's euler angles using this:
    - https://github.com/rtlayzell/Myo.Net/blob/master/Source/Myo.Net/Myo.Net/Source/Quaternion.cpp#L64
    - does myo-java have equivalent?
      - no
  - look for quaternion utilities library for jvm.
    - using "gov.nasa" % "worldwind"
      - http://builds.worldwind.arc.nasa.gov/worldwind-releases/1.3/docs/api/gov/nasa/worldwind/geom/Quaternion.html
      - `(new Quaternion(1,1,1,1)).getRotationX.radians`
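
for reference, the standard quaternion-to-Euler conversion (which the Myo.Net helper linked above appears to implement) in plain Scala. component order (w, x, y, z) is assumed here; check it against whichever quaternion class is used.

```scala
// Euler angles in radians, derived from a unit quaternion (w, x, y, z).
final case class Euler(roll: Double, pitch: Double, yaw: Double)

def toEuler(w: Double, x: Double, y: Double, z: Double): Euler = {
  val roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
  // clamp guards against NaN from asin when |arg| creeps past 1 numerically
  val sinp = math.max(-1.0, math.min(1.0, 2 * (w * y - z * x)))
  val pitch = math.asin(sinp)
  val yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
  Euler(roll, pitch, yaw)
}

println(toEuler(1, 0, 0, 0)) // identity rotation: all angles 0
```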

17:01 pause. 1:03 elapsed.

17:35 calculate relative angles (shortest rotation).
  - MyoPilot divides by max. max rotation in radians is probably Math.PI*2.
  - why does myopilot use 3 diffs when calculating relative angles?
    - possibly because angle can be negative.
    - implementing the same.
  - abstract away quaternions in favor of Angles(x,y,z).
  - how to persist reference angles?
    - the stream could capture the reference orientation and replay it into the zipper (giving it a 3rd input). this is the reactive way.
      - to keep from complicating the graph, the junction could be changed to zip to MyoState(pose,orient) which flows into WantedState.
        - between them would be a substream that would capture take-reference and drop-reference events and replay them to WantedState zipper in form of Either[NoReference,Reference(angles)]
    - a faster implementation would be to store refAngles outside stream as Option[Angles].
      - with synchronized methods.
        - implemented.
      - or with akka. tips: http://stackoverflow.com/a/22814369
        - possibly more lightweight
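
a minimal sketch of the "store refAngles outside the stream with synchronized methods" option; Angles and ReferenceHolder are illustrative names.

```scala
// Illustrative angle triple; the real class would hold processed rotations.
final case class Angles(x: Double, y: Double, z: Double)

// Thread-safe holder for the reference orientation.
final class ReferenceHolder {
  private var ref: Option[Angles] = None
  def capture(a: Angles): Unit = synchronized { ref = Some(a) }
  def drop(): Unit = synchronized { ref = None }
  def get: Option[Angles] = synchronized { ref }
}

val holder = new ReferenceHolder
holder.capture(Angles(0.1, 0.2, 0.3))
println(holder.get) // Some(Angles(0.1,0.2,0.3))
holder.drop()
println(holder.get) // None
```

the stream's zipper can read `holder.get` when forming a WantedState; the akka-actor variant would replace the synchronized block with message passing.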

19:05 pause. 1:30 elapsed.

end of day. 2:33 total.

for the sake of quick development, WantedState now persists referenceOrientation and chooses when to capture or drop it. it is also responsible for mapping myo's states and axes to the drone's in general.

16-08-02

myopilot mappings
  - fist + tilting arm tilts drone
  - spread fingers + up|down makes drone ascend|descend
  - doubletap triggers lift-off|landing.
    - the way for a gesture to act as a toggle is to react to it only on pose changes, and then keep track of its state.
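
the toggle-on-pose-change idea as a sketch: only the rising edge of the double-tap pose flips the flying flag, so holding the pose does not retrigger. all names are illustrative.

```scala
// Minimal pose alphabet for the sketch.
sealed trait PoseType
case object Rest extends PoseType
case object DoubleTap extends PoseType

final class FlightToggle {
  private var lastPose: PoseType = Rest
  private var flying = false
  def isFlying: Boolean = flying
  // Flip state only on the transition into DoubleTap (rising edge).
  def onPose(pose: PoseType): Unit = {
    if (pose == DoubleTap && lastPose != DoubleTap) flying = !flying
    lastPose = pose
  }
}

val t = new FlightToggle
// toggled on, held (ignored), released, toggled off again:
List(Rest, DoubleTap, DoubleTap, Rest, DoubleTap).foreach(t.onPose)
println(t.isFlying) // false
```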

16-08-03

stripping myo-java's Pose down to PoseType.

Angles companion object's apply should compress radian rotation to [-1,1].
  - Angles should hold rotation in this range only.

how to apply processing to input signal? processing like deadzoning and squaring or cubing.
  - should be rephrased. where to apply this processing?
  - Angles subnamespace holds similar logic.
    - it could be processed when calling the Quaternion->Angles auxiliary constructor, together with range compression.

removing Axis class; it duplicates Angles' role.

16-08-04

unclear why myo-java's hub.run in examples is supplied 50ms as parameter.
  - docs say the parameter is how long the listeners will be active.

YADrone's comms
  - look at tutorial for how to setup
    - https://vsis-www.informatik.uni-hamburg.de/oldServer/teaching//projects/yadrone/tutorial/tutorial.html
  - look at CommandManager for how to control.
    - actually ARDrone is the top level api. look at move3D.
