Abstract
Human movements are associated with the feeling
that their sensory consequences are self-generated (sense of agency, SOA). Though it has
been shown that the sense of agency for our movements is sensitive to the temporal
coherence between our actions and their outcomes, previous research has focused
on a particular temporal ordering, namely that movements (action) precede
sensory consequences (outcome). Here, we wondered whether SOA could be
felt for movements in the artificial case where this ordering is inverted: when
sensory effects precede their corresponding movement. To test this, we predict
the onset of voluntary movements (finger deflections) and provide visual sensory
consequences that precede them (negative delay). Furthermore, in the same
participants, we test the standard temporal ordering, investigating trials
where movements precede their visual consequences (positive delay). By
performing hundreds of trials per participant and utilizing methods from
psychophysics, we fully characterize SOA as a function of the temporal offset
between visuo-motor actions and effects, offering a comprehensive view of the
dependence of SOA on the temporal coherence of action and effect.
Introduction
Materials and Methods
The principle of the experiment
is to let participants perform a self-paced, simple hand movement (index
finger deflection) within a XX-second trial and to show them, in place of
their real hand, a virtual hand performing the same movement (3D animation), but
at a random time within the same interval. Two cases can therefore occur:
either the participant moves first (movement onset precedes animation onset),
or the animation comes first (animation onset precedes movement onset). For
each trial, participants answered a forced-choice question about whether the
movement they saw corresponded to the movement they made. The system was
precisely tuned to guarantee an optimal visual correspondence between the real and
virtual hands, to record event timings precisely, and to minimize the time
interval between the two events by biasing the randomization of the
animation onset time.
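For clarity, the temporal offset between the two onsets can be expressed as a signed delay. The following minimal Python sketch illustrates the two possible orderings; the variable names and the sign convention (delta_t = TV - TM) are ours, chosen only to match the positive/negative delay terminology used in the Abstract:

```python
def classify_trial(t_movement: float, t_visual: float) -> tuple[float, str]:
    """Return the visuo-motor offset (s) and which event came first.

    delta_t > 0: the movement preceded the animation (positive delay);
    delta_t < 0: the animation preceded the movement (negative delay).
    Both times are measured from scene onset (T0).
    """
    delta_t = t_visual - t_movement
    ordering = "movement first" if delta_t > 0 else "animation first"
    return delta_t, ordering

# Example: finger lifted 2.10 s after T0, animation started at 1.95 s.
print(classify_trial(2.10, 1.95))  # (-0.15..., 'animation first')
```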
Participants
For the main experiment, 10 healthy, right-handed participants were
recruited (ages XX.X ± X.XXX, mean ± SD;
XXX females). For the pilot behavioral study, we recruited an additional 14
healthy participants (XX-handed; ages XX.X ± X.XXX, mean ± SD; XXX females). Both studies were undertaken in accordance
with the ethical standards defined in the Declaration of Helsinki and were approved by the local
ethics research committee at the University of Lausanne.
Material and procedure
Participants sat at a table and
placed their right hand underneath a computer monitor, holding a block
containing a touch sensor. The monitor occluded vision of their forearm and, in
correspondence with their real hand, displayed in stereoscopic 3D a virtual
hand holding a virtual block (Fig. 2A). Head movements were restrained with a chin rest and the experiment
took place in a darkened room.
The
touch sensor was read by an Arduino™ microcontroller (16 MHz clock),
allowing millisecond-precision detection of the
moment the index finger broke contact with a conductive surface when
lifting (movement onset time). The 3D graphics were rendered using OpenGL on an
nVidia Quadro 2000 graphics card, using the Quad-Buffer extension and nVidia
3DVision glasses for stereoscopic display on a XXX monitor at 120 Hz. The
constant latency of the graphics hardware (the interval from the trigger time
issued by the CPU to the actual visual onset of the event on the LCD monitor)
was measured to be 30 ms and accounted for in all estimations of the visual onset times.
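As an illustration of how these timings could be handled on the host side, the sketch below assumes (our assumption; the actual acquisition pipeline is not described here) that the microcontroller writes a single byte over a serial link at the moment of contact break, and folds the measured 30 ms rendering latency into the visual onset estimate:

```python
import time

import serial  # pyserial

GRAPHICS_LATENCY_S = 0.030  # measured CPU-trigger-to-LCD latency (30 ms)

def corrected_visual_onset(trigger_time_s: float) -> float:
    """Estimate when the animation actually appeared on the LCD monitor."""
    return trigger_time_s + GRAPHICS_LATENCY_S

def wait_for_movement_onset(port: str = "/dev/ttyACM0") -> float:
    """Block until the touch sensor reports contact break; return a host timestamp (s).

    Assumes the Arduino sends one byte the instant the finger leaves the
    conductive surface (hypothetical protocol).
    """
    with serial.Serial(port, baudrate=115200, timeout=None) as ser:
        ser.read(1)                 # blocks until the onset byte arrives
        return time.perf_counter()  # timestamp on the host clock
```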
In a short training block,
participants learned to make index finger deflections that mirrored the
velocity and amplitude of an animated finger in the virtual scene. During these 20 trials, no delay
was introduced between movement and visual onset.
The following question was then presented,
and participants were asked to remember it: “Did the movement that you did
correspond to the movement that you saw?” No specific explanation was given
about what could or could not correspond. Answers
to the question were given with the left hand by pressing buttons on a gamepad
(index finger for yes, middle finger for no; reaction times were ignored).
For each trial of the actual
experiment, participants saw the virtual scene appear (T0) and were
instructed to lift their finger at a time of their choosing (TM, movement
onset time). The animation of the virtual finger occurred at a random time (TV,
visual onset time). A question screen was
displayed after both events had occurred, and the trial ended when
participants pressed a button to respond. If participants did not lift their
finger before Tmax (XX s), the protocol moved forward and the trial was
rejected. Timings (movement and visual onsets) and participants’ answers were
logged for all valid trials.
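A schematic of this per-trial control flow is sketched below; the injected callables stand in for the real stimulus, acquisition, and response routines, and the Tmax value is an arbitrary stand-in for the redacted value above:

```python
import random
import time

T_MAX_S = 8.0  # arbitrary stand-in for Tmax (XX s)

def run_trial(schedule_animation, wait_for_lift, ask_question, log):
    """One trial: scene onset (T0), random animation onset (TV), finger lift (TM).

    The four arguments are placeholder callables for the real I/O routines.
    """
    t0 = time.perf_counter()                       # scene appears (T0)
    t_visual = t0 + random.uniform(0.0, T_MAX_S)   # randomized animation onset
    schedule_animation(at=t_visual)
    t_movement = wait_for_lift(deadline=t0 + T_MAX_S)  # returns None past Tmax
    if t_movement is None:
        return None  # finger not lifted before Tmax: trial rejected
    answer = ask_question()  # shown only after both events have occurred
    log({"TM": t_movement - t0, "TV": t_visual - t0, "answer": answer})
    return answer
```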
Finally, because a purely
uniform distribution for the randomization of the visual onset times (between
T0 and Tmax) would require an extremely high number of trials for statistical
analysis of data points with very close timings (TM and TV less than 100 ms
apart), our system employed a two-part strategy. First, it tried to anticipate the
movement onset time, with the aim of providing visual consequences just prior to
that moment, using a dynamic predictive algorithm based on per-subject
movement history profiles. Second, if participants moved before the visual
onset scheduled by the predictive algorithm, the visual consequences were presented
with a delay drawn from a uniform distribution within a small window of interest (0
to 750 ms).
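The sketch below illustrates this two-part strategy. The prediction rule shown (a running mean of recent movement onsets, offset by a fixed lead) is our assumption for illustration; the text above only specifies that a dynamic algorithm used per-subject movement history profiles:

```python
import random
from collections import deque

class OnsetScheduler:
    """Illustrative two-part visual onset scheduler (prediction rule assumed)."""

    def __init__(self, lead_s: float = 0.2, history: int = 10):
        self.lead_s = lead_s                 # aim this far before the predicted TM
        self.onsets = deque(maxlen=history)  # recent movement onsets, s after T0

    def predicted_visual_onset(self) -> float:
        """Schedule TV just prior to the anticipated movement onset."""
        if not self.onsets:
            return 2.0  # arbitrary default before any history exists
        return sum(self.onsets) / len(self.onsets) - self.lead_s

    def fallback_delay(self) -> float:
        """Participant moved first: delay drawn uniformly from the window of interest."""
        return random.uniform(0.0, 0.750)

    def record(self, t_movement: float) -> None:
        """Update the per-subject movement history after each trial."""
        self.onsets.append(t_movement)
```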
Experimental design and statistical analyses
The first
experiment consisted of 600 trials executed in 4 blocks, preceded by XX training
trials.
Here we manipulated the temporal coherence (ΔT)
between visual and motor events using a continuous design. To analyze these
continuous data, we binned SOA responses into 20 ms ΔT intervals. Because our
predictive algorithm was used to anticipate movement onset, the number of trials
per bin was not balanced (Fig. XX; Fig. SXX).
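A minimal numpy sketch of this binning, computing the proportion of "yes" (correspondence) answers per 20 ms ΔT bin (our own illustration; the actual analysis code may differ):

```python
import numpy as np

def bin_responses(delta_t_s, said_yes, bin_width_s=0.020):
    """Proportion of 'yes' answers per Delta-T bin.

    delta_t_s: per-trial visuo-motor offsets (TV - TM), in seconds.
    said_yes:  per-trial booleans for the forced-choice answer.
    """
    delta_t_s = np.asarray(delta_t_s, dtype=float)
    said_yes = np.asarray(said_yes, dtype=float)
    edges = np.arange(delta_t_s.min(), delta_t_s.max() + bin_width_s, bin_width_s)
    n_bins = len(edges) - 1
    idx = np.clip(np.digitize(delta_t_s, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    yes = np.bincount(idx, weights=said_yes, minlength=n_bins)
    centers = edges[:-1] + bin_width_s / 2
    with np.errstate(divide="ignore", invalid="ignore"):
        p_yes = np.where(counts > 0, yes / counts, np.nan)
    return centers, p_yes, counts  # unequal counts expected (predictive sampling)
```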
To assess the electrical brain activity
associated with the individual visual and motor events, two additional baseline
condition blocks were recorded while participants saw the same virtual scene. In
the first baseline condition, participants were instructed to relax while watching
the fixation cross as the virtual finger moved (random onset time; vision only).
In the second condition, participants were instructed to perform a voluntary
finger deflection as in the main experiment, but no virtual visual counterpart
accompanied the movement (motor only).
Each of these baseline blocks consisted of XXX trials and was repeated XXX times,
resulting in XXX trials.
Pilot Study: Agency for virtual hands versus virtual objects
The pilot study followed the same procedure as the main experiment, except that
participants completed 300 trials with the virtual hand and 300 trials with a
virtual object. These additional findings are briefly mentioned in the
Introduction, reported in the last section of the Results, and detailed in the
supplemental materials.