<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://vrl.cs.brown.edu/wiki/index.php?action=history&amp;feed=atom&amp;title=CS295J%2FProject_Schedule%3ACaroline%2FFields_Interview</id>
	<title>CS295J/Project Schedule:Caroline/Fields Interview - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://vrl.cs.brown.edu/wiki/index.php?action=history&amp;feed=atom&amp;title=CS295J%2FProject_Schedule%3ACaroline%2FFields_Interview"/>
	<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Caroline/Fields_Interview&amp;action=history"/>
	<updated>2026-04-20T01:41:33Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.1</generator>
	<entry>
		<id>http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Caroline/Fields_Interview&amp;diff=5670&amp;oldid=prev</id>
		<title>Caroline Ziemkiewicz: New page: What&#039;s the data? What do you want out of it?   Q&#039;s: language processing psycholinguistic questions how social/emotional factors affect language processing  ERP is good for temporal sensiti...</title>
		<link rel="alternate" type="text/html" href="http://vrl.cs.brown.edu/wiki/index.php?title=CS295J/Project_Schedule:Caroline/Fields_Interview&amp;diff=5670&amp;oldid=prev"/>
		<updated>2011-11-03T20:34:50Z</updated>

		<summary type="html">&lt;p&gt;New page: What&amp;#039;s the data? What do you want out of it?   Q&amp;#039;s: language processing psycholinguistic questions how social/emotional factors affect language processing  ERP is good for temporal sensiti...&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;What&amp;#039;s the data? What do you want out of it? &lt;br /&gt;
&lt;br /&gt;
Q&amp;#039;s: language processing&lt;br /&gt;
psycholinguistic questions&lt;br /&gt;
how social/emotional factors affect language processing&lt;br /&gt;
&lt;br /&gt;
ERP is good for temporal sensitivity - can go as far as millisecond level&lt;br /&gt;
interested in which *stages* of processing are being affected&lt;br /&gt;
&lt;br /&gt;
MRI and MEG (magnetoencephalography) - similar to EEG&lt;br /&gt;
MEG differences: skull and scalp are less opaque to magnetic fields&lt;br /&gt;
&lt;br /&gt;
combining data sources is difficult&lt;br /&gt;
complex math involved&lt;br /&gt;
mostly gathered and analyzed separately; Phil Holcomb at Tufts working on project to bring data together, but on a long time scale&lt;br /&gt;
&lt;br /&gt;
they mostly gather EEG data at 5ms intervals; higher-res is possible, but not helpful&lt;br /&gt;
&lt;br /&gt;
-- experiment room --&lt;br /&gt;
&lt;br /&gt;
experiment: sentence comprehension task, varying a single word between positive, negative, and neutral across two conditions (sentence in third person or second person) &lt;br /&gt;
do people process the emotional stimulus differently when sentence is about them vs. other person?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
output they see during experiment: waves from all the different electrodes (N? about 20) at a 1.2s time scale&lt;br /&gt;
they&amp;#039;ll cut up signal based on events later&lt;br /&gt;
&lt;br /&gt;
waveform is very noisy, so you need a lot of trials - 30 events for each condition&lt;br /&gt;
ERP (event-related potential) - a way to analyze EEG data&lt;br /&gt;
&lt;br /&gt;
need to watch output during experiment to prevent bad data&lt;br /&gt;
- e.g. artifacts (like blinks) &lt;br /&gt;
can tell participants to stop blinking if it keeps happening!&lt;br /&gt;
&lt;br /&gt;
output is a distribution of activation (&amp;quot;smear&amp;quot;) &lt;br /&gt;
center of distribution is not necessarily the actual location in the brain... e.g. N400 response shows up on the opposite side of the skull from the brain area that generates it&lt;br /&gt;
inverse problem - infinite # of brain areas where a signal could have originated&lt;br /&gt;
want to localize within % of probability&lt;br /&gt;
&lt;br /&gt;
combining this data with MRI or MEG can add spatial information (but again, hard)&lt;br /&gt;
distribution at least tells you if two events are happening in different areas or not&lt;br /&gt;
&lt;br /&gt;
-- data processing and analysis --&lt;br /&gt;
&lt;br /&gt;
lots of script files for data collection and pre-processing&lt;br /&gt;
&lt;br /&gt;
events are coded according to a coding scheme for study events - just an excel file&lt;br /&gt;
&lt;br /&gt;
BDF: bins of interest &lt;br /&gt;
select out time slices that represent events&lt;br /&gt;
categorize events based on the study conditions and other slices of interest&lt;br /&gt;
script generates the bins&lt;br /&gt;
(so each bin is a collection of small time-slices of waveform data around an experimental event)&lt;br /&gt;
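The binning step described above can be sketched in plain NumPy. This is a hypothetical reconstruction, not the lab's homebrew tooling; the data layout, function name, and parameters are all assumptions:

```python
import numpy as np

def make_bins(eeg, event_samples, pre, post):
    """Cut fixed-length time slices (epochs) out of a continuous
    EEG record around each coded experimental event.

    eeg           : array of shape (n_channels, n_samples)
    event_samples : dict mapping condition name to a list of event
                    sample indices (from the event coding scheme)
    pre, post     : samples to keep before / after each event
    Returns a dict mapping condition name to an array of shape
    (n_events, n_channels, pre + post) - one "bin" per condition.
    """
    bins = {}
    for condition, samples in event_samples.items():
        epochs = []
        for s in samples:
            start = s - pre
            stop = s + post
            # skip events too close to the edge of the recording
            if start >= 0 and eeg.shape[1] >= stop:
                epochs.append(eeg[:, start:stop])
        if epochs:
            bins[condition] = np.stack(epochs)
        else:
            bins[condition] = np.empty((0, eeg.shape[0], pre + post))
    return bins
```

Reporting the per-condition counts from such a function is what makes the bin-generation output useful for error checking, as noted below.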
&lt;br /&gt;
scripts are just straight text, and they all seem to be processed in homebrew software made by a senior researcher in the lab&lt;br /&gt;
lots of different processing apps; users just drag and drop script files onto the appropriate executable. &lt;br /&gt;
&lt;br /&gt;
bin generation outputs count of bins, events and .... what&amp;#039;s that word supposed to be? wives? elves? &lt;br /&gt;
useful for error checking&lt;br /&gt;
&lt;br /&gt;
RawView -&amp;gt; viewer for the raw waveform, also used for error checking&lt;br /&gt;
sample events&lt;br /&gt;
semi-automatic artifact detection (e.g. blinks)&lt;br /&gt;
researcher checks the automatic output for accuracy and changes thresholds as needed.&lt;br /&gt;
usually do need to change something&lt;br /&gt;
could do it all by hand, but time and consistency are problems&lt;br /&gt;
they prefer false negatives to false positives; better to throw out good data than keep bad (similar to CBDM researchers)&lt;br /&gt;
artifacts cause really strong signals, so they can throw off the analysis by a lot&lt;br /&gt;
checks errors by viewing the raw waveforms&lt;br /&gt;
Eric seems to have an intuitive sense of what good and bad wave patterns look like&lt;br /&gt;
&lt;br /&gt;
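Semi-automatic artifact detection of this kind is commonly an amplitude threshold on an eye (EOG) channel. A minimal sketch; the channel layout, threshold value, and peak-to-peak criterion are assumptions, not the lab's actual settings:

```python
import numpy as np

def flag_blink_epochs(epochs, eog_channel, threshold_uv):
    """Flag epochs whose EOG channel exceeds a peak-to-peak
    amplitude threshold - the usual signature of a blink.

    epochs       : array of shape (n_events, n_channels, n_samples)
    eog_channel  : index of the eye channel
    threshold_uv : peak-to-peak rejection threshold in microvolts
    Returns a boolean array; True means reject this epoch.
    """
    eog = epochs[:, eog_channel, :]
    peak_to_peak = eog.max(axis=1) - eog.min(axis=1)
    return peak_to_peak > threshold_uv
```

Raising or lowering threshold_uv and re-checking the flagged epochs against the raw waveforms mirrors the manual threshold-adjustment loop described above.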
this process takes 10-15 minutes per participant (varies)&lt;br /&gt;
&lt;br /&gt;
another script generates averages across each bin &lt;br /&gt;
check data again, and if bins seem small, return to RawView and filter on the bin of interest&lt;br /&gt;
want to keep the number of events in each bin at 30 or more, or N gets too small for confidence&lt;br /&gt;
so if the number gets small, you want to make sure no good data is getting thrown out, and adjust the threshold again if you can&lt;br /&gt;
&lt;br /&gt;
CalNormalize -&amp;gt; calibrate waveforms by a wave of known characteristics, then scale everything&lt;br /&gt;
bandpass and lowpass filter to remove noise&lt;br /&gt;
Fourier filter taking out everything over 15 Hz&lt;br /&gt;
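A cutoff like that can be sketched with a plain FFT: zero every component above the cutoff and transform back. A crude illustration only; the lab's real filter design (windowing, roll-off) is not described here:

```python
import numpy as np

def fft_lowpass(signal, sample_rate_hz, cutoff_hz):
    """Crude low-pass: zero all Fourier components above
    cutoff_hz, then inverse-transform.

    signal         : 1-D array of voltage samples
    sample_rate_hz : sampling rate of the recording
    cutoff_hz      : keep frequencies at or below this value
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)
```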
&lt;br /&gt;
(again, these are all separate executables and he&amp;#039;s just dragging and dropping script files on them)&lt;br /&gt;
&lt;br /&gt;
that&amp;#039;s the end of pre-processing, now you actually look at the data...&lt;br /&gt;
&lt;br /&gt;
ERPView&lt;br /&gt;
&lt;br /&gt;
this view shows the waveforms from each electrode, arranged in a roughly spatially-accurate formation&lt;br /&gt;
average waveform taken across participants&lt;br /&gt;
you can look at one bin or split it into multiple bins (e.g. conditions) per electrode&lt;br /&gt;
looking for components - common patterns in the waveform, associated with a type of brain activity&lt;br /&gt;
e.g. N400 -&amp;gt; associated with semantic processing&lt;br /&gt;
P600 -&amp;gt; late positivity, associated with deeper encoding&lt;br /&gt;
&lt;br /&gt;
components have been discovered over the history of EEG research&lt;br /&gt;
not always 100% sure this component is a known component you&amp;#039;re comparing it to... need to use distribution to check&lt;br /&gt;
certain components associated with particular electrode locations&lt;br /&gt;
&lt;br /&gt;
also focus in and subtract two conditions to see difference between them&lt;br /&gt;
generate contour maps (rainbows kill people!!) over a head-shaped glyph - show activation distribution&lt;br /&gt;
can be misleading to novices, since activation location doesn&amp;#039;t really correspond to brain areas&lt;br /&gt;
all this information can help confirm identity of a component&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Eric has trouble explaining what the contour maps are for&lt;br /&gt;
helps to connect spatial and temporal... but again, spatial is kind of misleading&lt;br /&gt;
separate effects across conditions&lt;br /&gt;
&lt;br /&gt;
can look at individuals or averages&lt;br /&gt;
can switch channels&lt;br /&gt;
&lt;br /&gt;
ERPManipulate -&amp;gt; more processing, missed something here&lt;br /&gt;
&lt;br /&gt;
ERPMeasure -&amp;gt; converts all the data to numbers for stats analysis&lt;br /&gt;
components based on time and distribution&lt;br /&gt;
scale by a baseline&lt;br /&gt;
&lt;br /&gt;
lots of raw text data&lt;br /&gt;
script file within SPSS used to process text data&lt;br /&gt;
&lt;br /&gt;
need to divide up scalp spatially&lt;br /&gt;
don&amp;#039;t want to treat every electrode as a factor in the ANOVA&lt;br /&gt;
but location at some level should be a factor&lt;br /&gt;
need a coarser spatial organization&lt;br /&gt;
but no standard way to divide up the head, you just make up a scheme and argue for it in the paper&lt;br /&gt;
regions analysis vs. column analysis&lt;br /&gt;
&lt;br /&gt;
VBA script does SPSS processing so it&amp;#039;s all invisible to the researcher&lt;br /&gt;
extracts an organized excel file&lt;br /&gt;
just lists significant effects, then followup tests to each effect &lt;br /&gt;
highlight shade to show significance level&lt;br /&gt;
&lt;br /&gt;
time windows&lt;br /&gt;
needs to be adapted per analysis&lt;br /&gt;
&lt;br /&gt;
dependent variable is either average amplitude over time range or peak amplitude&lt;br /&gt;
(avg amplitude is better, peak in an average is problematic)&lt;br /&gt;
also look at latency&lt;br /&gt;
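The average-amplitude dependent variable can be sketched directly. The baseline handling and window parameters here are illustrative assumptions, not the lab's actual measurement settings:

```python
import numpy as np

def mean_amplitude(erp, sample_rate_hz, baseline_s, win_start_s, win_end_s):
    """Average-amplitude measure for one averaged ERP waveform.

    erp            : 1-D waveform; the first baseline_s seconds are
                     pre-stimulus baseline, stimulus onset follows
    baseline_s     : length of the pre-stimulus baseline in seconds
    win_start_s,
    win_end_s      : measurement time window, in seconds after onset
    Returns mean voltage in the window relative to the baseline mean.
    """
    n_base = int(round(baseline_s * sample_rate_hz))
    baseline_mean = erp[:n_base].mean()
    i0 = n_base + int(round(win_start_s * sample_rate_hz))
    i1 = n_base + int(round(win_end_s * sample_rate_hz))
    return erp[i0:i1].mean() - baseline_mean
```

Unlike a peak measure, this averages over the whole window, which is why it behaves better on waveforms that are themselves averages across trials and participants.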
&lt;br /&gt;
do this whole process after each participant, then more analysis when all the data is in&lt;br /&gt;
go through the data lots of different ways&lt;br /&gt;
figure out time windows, scalp divisions&lt;br /&gt;
&lt;br /&gt;
statistics is confirmatory; the bulk of the analysis is visual&lt;/div&gt;</summary>
		<author><name>Caroline Ziemkiewicz</name></author>
	</entry>
</feed>