What is Attention?

James (1890, pp. 403-404):

“Everyone knows what attention is. It is the taking possession of the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others …”

A bit more recently, Shiffrin (1988, p. 739) wrote:

 “Attention has been used to refer to all those aspects of human cognition that the subject can control … and to all aspects of cognition having to do with limited resources or capacity, and methods of dealing with such constraints”

And even more recently, Cowan (1995) writes about ‘selective attention’ in a sense that is close to James’ definition of attention. Selective attention is not necessarily voluntary, and it is a limited-capacity process.

 

Some notes on experimental design in Psychology

Major Confounding factors

Maturation – Mainly concerns longitudinal studies (and children): as subjects grow older between the pre- and posttreatment/test, this may affect the results. Children, for instance, might become more sophisticated, gain experience, and get bigger and stronger as they age. Natural maturation also happens in other subjects: when placed in a new environment, adults make predictable changes or adjustments over time, and diseases usually have predictable courses. This means that observed changes over time may be due to maturation rather than to the independent variable.

History – During the course of a study, events unrelated to the manipulation can occur and affect the outcome. Threats to internal validity due to history are generally greatest when there is a long time between pre- and posttest measurements.

Testing – Repeated testing of participants can threaten internal validity, because participants may become more skilled simply through repeated exposure to the measurement instrument.

Instrumentation – Findings can be due to changes in the measuring instrument over time rather than to the independent variable (IV).

Regression to the Mean – When subjects are selected because their scores on a measure are extremely high or low, they are usually not as extreme on a second testing. That is, their scores will regress toward the mean. The amount of regression depends on how much performance on the test is due to variable factors (e.g., amount of study): the more performance depends on variable factors, the more regression there will be (a small simulation illustrating this appears after these notes).

Selection – This confounding factor appears when, for instance, comparing groups that are not equivalent before the manipulation begins.

Attrition – Attrition occurs when participants drop out of the study due to some biasing factor. For instance, if participants drop out from one group but not from another (or not as much), the groups can lose important characteristics. It is important not to create situations or use procedures that bias some participants against completing the study and thereby change the outcome.

Diffusion of Treatment – If participants from different experimental conditions are able to talk with each other, some may reveal the procedures to others. Test participants might talk to control participants who may not even be aware that they are in a control group. These types of information exchanges are called diffusion of treatment and can affect the data such that differences between the groups disappear.

Sequence effects – Experience with one condition might affect responses to later conditions. If the condition order is always ABC, systematic confounding can occur: performance in B and C might reflect both the effect of the condition itself and the effect of already having been exposed to A. To get rid of sequence effects, use more than one order (counterbalancing).
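
To illustrate the regression-to-the-mean point above, here is a minimal simulation sketch (the means, standard deviations, and the top-5% cutoff are my own assumptions, not from these notes): part of each score is stable ability, part is variable noise, and the extreme scorers at time 1 end up closer to the mean at time 2.

import numpy as np

rng = np.random.default_rng(42)

n = 10000
true_ability = rng.normal(100, 10, n)        # stable component of the score
test1 = true_ability + rng.normal(0, 10, n)  # time 1: ability + variable factors
test2 = true_ability + rng.normal(0, 10, n)  # time 2: new draw of variable factors

# Select the subjects who scored in the top 5% at time 1
extreme = test1 >= np.percentile(test1, 95)

print("Mean at time 1 (selected group):", test1[extreme].mean())
print("Mean at time 2 (same group):    ", test2[extreme].mean())
# The time-2 mean is noticeably closer to 100: regression toward the mean.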

Subject and Experimenter Effects

Expectations and biases of both the experimenter and the subjects can systematically affect the results of a study in subtle ways, thus reducing validity of the study.


Subject Effects – Participants in an experiment are not completely naïve. That is, they will have understandings, ideas, and perhaps misunderstandings about what to expect in the study. Different people have different reasons for participating: money, course credit, the hope of learning something, and so on. Participants volunteer and carry out their role based on different motivations, understandings, expectations, and biases, all of which can affect the outcome of a study. An experimental setting is not natural, and when being observed people might behave differently than they would otherwise. This can lead to subject effects: any changes in behavior that are due to being part of an experiment rather than to the experimental variables. Demand characteristics occur when participants pick up cues about how they are expected to behave (according to the hypotheses, etc.); this usually happens unintentionally. The placebo effect, a related phenomenon, occurs when participants expect a specific effect.

Experimenter effects – concern any biasing effects that are due to the actions of the researcher. Experimenter expectancies are the experimenter’s expectations about the outcome of the study. These expectations might cause researchers to bias results in many ways: the experimenter can influence the participants’ behavior in favor of the hypotheses, cherry-pick data and statistical methods, and interpret results in a biased manner.

Examples of ways the experimenter can influence the participant: presenting cues in the form of intonation, facial expressions, or changes in posture, verbally reinforcing some responses and not others, or incorrectly recording participants’ responses.

Experimental designs

A pretest-posttest design with a control group controls for history and maturation.

Variance:

  • Systematic between-groups variance
    • Difference between groups could be due to
      1. Effect of the independent variable (experimental variance which is what we want!)
      2. Effects of confounding variables (extraneous variance)
      3. A combination of (1) and (2)

Natural variability due to sampling error will also increase the between-groups variability somewhat.

  • Nonsystematic Within-Groups Variance
    • Error Variance – non-systematic within-groups variability.
      Due to random factors affecting some participants more than others within a group, rather than systematically affecting all members of a group. Error variance can be increased by factors that are not stable, such as a participant feeling ill or uncomfortable participating. Experimenter and equipment variations can also cause measurement errors for some participants.

In experimentation, each study is designed so as to maximize experimental variance, control extraneous variance, and minimize error variance.

Maximizing experimental variance. Experimental variance is due to the effect of the independent variable (IV) on the dependent variable (DV). At least two levels of the IV should be present in an experiment, and the experimental conditions need to be distinct! It can be useful to include a manipulation check to see that the manipulation had the planned effect on participants. One way to check is to use ratings.

To efficiently control extraneous variables and minimize their possible differential effects on the groups, we must be sure that (1) the two groups (experimental and control) are AS similar as possible, and (2) the groups are treated in exactly the same way EXCEPT for the IV manipulation.

Ways to control extraneous variance:

  1. Random assignment to groups decreases the probability that the groups will differ – the best method (see the sketch after this list)
  2. Homogenous sample
  3. Confounding variables can be built into the experiment as an additional IV
  4. Matching or a within-subjects design
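
As a quick illustration of point 1, here is a minimal random-assignment sketch in Python (the participant IDs and group names are made up for the example):

import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participant IDs

random.seed(1)                 # fix the seed so the assignment is reproducible
random.shuffle(participants)   # shuffle, then split into two equal groups

half = len(participants) // 2
groups = {"experimental": participants[:half],
          "control": participants[half:]}

print(groups)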

Minimizing Error Variance.

Large error variance can hide differences between conditions that are due to the experimental manipulations. Measurement error is one source of error variance: if participants do not respond consistently from trial to trial because of such factors, the instrument is unreliable. To minimize sources of error variance, use carefully controlled conditions of measurement and reliable instruments. Another source of error variance is individual differences; this type of variance can be minimized by within-subjects designs.

Experimental designs – Randomize when possible!

The four basic designs for testing a single IV using independent groups:

  1. Randomized, posttest-only, control-group design
    Here we have two groups, Group A and Group B, and the groups are compared on the posttest only.
    This design tests the hypothesis that the IV affects the dependent measure.
    Random selection protects external validity. Furthermore, attrition and regression to the mean are reduced by random assignment of participants (i.e., both groups will have [roughly] the same number of extreme scores). Threats to internal validity from instrumentation, history, and maturation are minimized by the inclusion of a control group.
  2. Randomized, pretest-posttest, control-group design
    An improvement on the randomized, posttest-only, control-group design (above): a pretreatment test (pretest) is added before the manipulation.
  3. Multilevel, completely randomized, between-subjects design
  4. Solomon’s four-group design. Pretests may affect participants’ responses to the treatment or to the posttest, and the pretest can interact with the experimental manipulation, producing confounding interaction effects; the four-group design makes it possible to assess such pretest effects.

The t-test evaluates the size of the difference between the means of two groups. The difference between the two means is divided by an error term, which is a function of the variance of scores within each group and the sample sizes. It is easy to apply, common, and useful for testing differences between two groups.
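
As a minimal sketch of how such a t-test looks in Python using SciPy (the scores below are made-up example data, not from any study):

import numpy as np
from scipy import stats

# Made-up scores for an experimental and a control group
experimental = np.array([12, 15, 14, 16, 13, 17, 15, 14, 16, 18])
control      = np.array([11, 13, 12, 14, 12, 13, 11, 15, 12, 13])

# Independent-samples t-test: difference between means divided by an error term
t, p = stats.ttest_ind(experimental, control)
print(f"t = {t:.2f}, p = {p:.3f}")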

Analysis of Variance (ANOVA)
For multilevel designs with more than two groups. A one-way ANOVA has only one independent variable. ANOVA uses both the within-groups variance and the between-groups variance. Within-groups variance is a measure of non-systematic variation within a group – error or chance variation among individual participants within a group, due to factors such as individual differences and measurement errors. Between-groups variance represents how variable the group means are. It is a measure both of systematic factors that affect the groups differently and of variation due to sampling error; the systematic factors include experimental variance and extraneous variance. If the group means are approximately the same, the between-groups variance is small; if there is a large difference between the group means, the between-groups variance is large.

The F-test is used to assess statistical significance in an ANOVA. It involves the ratio of the between-groups mean square to the within-groups mean square.

F = mean square between groups / mean square within groups

The ratio can be increased either by increasing the between-groups mean square or by decreasing the within-groups mean square. The between-groups mean square is increased by maximizing the differences between groups; the within-groups mean square is minimized by controlling as many potential sources of random error as possible. Maximizing experimental variance and minimizing error variance is what we want!

We do not reject the hypothesis that there are no systematic differences between groups UNLESS the F-ratio is larger than we would expect by chance alone.

UPDATE: I found an excellent post on how to do a one-way ANOVA using Python. In fact, it covers four different ways to carry out a one-way ANOVA in Python: One-Way ANOVA in Python.
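
For reference, here is a minimal sketch of one such approach, using SciPy’s f_oneway (the three groups below are made-up example data):

import numpy as np
from scipy import stats

# Made-up scores for three levels of a single IV
group_a = np.array([12, 15, 14, 16, 13, 17])
group_b = np.array([11, 13, 12, 14, 12, 13])
group_c = np.array([15, 18, 17, 16, 19, 17])

# One-way ANOVA: F = between-groups mean square / within-groups mean square
f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.3f}")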

Planned comparisons are done to probe possible significant differences between specific means. The F-ratio only tells us that there IS a difference somewhere, not in which direction or between which groups. This is done by means of planned (a priori) comparisons, or contrasts.
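
Continuing the made-up ANOVA example above, one simple planned comparison between two specific groups could be sketched like this (a real a priori contrast would usually also control for the number of comparisons):

import numpy as np
from scipy import stats

# Reusing the made-up groups from the ANOVA sketch above
group_a = np.array([12, 15, 14, 16, 13, 17])
group_c = np.array([15, 18, 17, 16, 19, 17])

# A planned (a priori) comparison between two specific groups
t, p = stats.ttest_ind(group_c, group_a)
print(f"group_c vs. group_a: t = {t:.2f}, p = {p:.3f}")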

 

Delta Plots on Response time data using Python

In this post we are going to learn how to create delta plots for response (reaction) time data. Response time data are often used in experimental psychology: they are the dependent variable in many experiments that aim to draw inferences about cognitive processes.

Delta plots are a visualization method (Pratte, Rouder, Morey, & Feng, 2010; Speckman, Rouder, Morey, & Pratte, 2008). The plots are created using the quantiles of the response time distribution. Research has indicated that, even without a precise statistical inference test, delta plots can give the researcher key information concerning the underlying mechanisms of tasks thought to assess constructs such as cognitive control and inhibition (Pratte, Rouder, Morey, & Feng, 2010).

import matplotlib.pyplot as plt
import numpy as np

# Response times (in seconds) for two conditions, 50 trials each
data = {"x1": [0.794, 0.629, 0.597, 0.57, 0.524, 0.891, 0.707, 0.405, 0.808, 0.733,
               0.616, 0.922, 0.649, 0.522, 0.988, 0.489, 0.398, 0.412, 0.423, 0.73,
               0.603, 0.481, 0.952, 0.563, 0.986, 0.861, 0.633, 1.002, 0.973, 0.894,
               0.958, 0.478, 0.669, 1.305, 0.494, 0.484, 0.878, 0.794, 0.591, 0.532,
               0.685, 0.694, 0.672, 0.511, 0.776, 0.93, 0.508, 0.459, 0.816, 0.595],
        "x2": [0.503, 0.5, 0.868, 0.54, 0.818, 0.608, 0.389, 0.48, 1.153, 0.838,
               0.526, 0.81, 0.584, 0.422, 0.427, 0.39, 0.53, 0.411, 0.567, 0.806,
               0.739, 0.655, 0.54, 0.418, 0.445, 0.46, 0.537, 0.53, 0.499, 0.512,
               0.444, 0.611, 0.713, 0.653, 0.727, 0.649, 0.547, 0.463, 0.35, 0.689,
               0.444, 0.431, 0.505, 0.676, 0.495, 0.652, 0.566, 0.629, 0.493, 0.428]}

labels = list('AB')

# Start with a quick look at the two distributions using box plots
fig = plt.figure(figsize=(10, 10), dpi=100)
ax = fig.add_subplot(111)
bp = ax.boxplot([data['x1'], data['x2']], labels=labels)
plt.show()

[Image: box plots of the response time data for conditions A and B]

Here is the code that will create a Delta plot on our response time data above:

# Deciles (10th to 90th percentile) of the response time distributions
p = np.arange(10, 100, 10)

# Delta plot: quantile differences plotted against the quantile averages
df = np.percentile(data['x1'], p) - np.percentile(data['x2'], p)
av = (np.percentile(data['x1'], p) + np.percentile(data['x2'], p)) / 2

fig = plt.figure(figsize=(12, 9), dpi=100)
plt.plot(av, df, 'ro')
plt.ylim(-.05, .25)
plt.ylabel('Response Time Difference (sec)')
plt.xlabel('Mean Response Time (sec)')
plt.show()

[Image: delta plot of the response time data, created using Python]

That was pretty simple. In this tutorial you have learned how to create a delta plot that can lend support for drawing inferences about things such as inhibition or cognitive control. Drawing the inference is something you will have to do for yourself! But you can have a look at the references below for more information; they will probably help.

References

  • Pratte, M. S., Rouder, J. N., Morey, R. D., & Feng, C. (2010). Exploring the differences in distributional properties between Stroop and Simon effects using delta plots. Attention, Perception & Psychophysics, 72(7), 2013–25. http://doi.org/10.3758/APP.72.7.2013
  • Speckman, P. L., Rouder, J. N., Morey, R. D., & Pratte, M. S. (2008). Delta Plots and Coherent Distribution Ordering. The American Statistician, 62(3), 262–266. http://doi.org/10.1198/000313008X333493

OpenSesame

A great list for you if you would like to learn how to create experiments using OpenSesame. You will find tutorials in the form of text and videos, as well as links to blog posts and other useful resources.

OpenSesame is presentation and reaction-time measurement software with a Python scripting option. On the site you will find the download, a tutorial, information about Python scripting, a forum, reviews, and citation and publication details.

Source: OpenSesame

Introduction Video to Statsmodels

I found this introduction to Statsmodels. For those of you who don’t know, Statsmodels is a great Python library for conducting statistical analysis; many common methods are covered by the package. If you want to learn more about Python and data analysis you will most likely enjoy this YouTube video:

I certainly learned more about data analysis in Python by watching it. Statsmodels makes some tasks a lot easier and makes Python feel more similar to R.
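
To give a flavour of Statsmodels, here is a minimal sketch with made-up data (the variable names sleep and rt are my own; the R-style formula interface is part of what makes it feel familiar to R users):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data: does response time depend on hours of sleep?
rng = np.random.default_rng(0)
sleep = rng.uniform(4, 9, 100)
rt = 1.2 - 0.05 * sleep + rng.normal(0, 0.1, 100)
df = pd.DataFrame({"sleep": sleep, "rt": rt})

# Ordinary least squares regression using an R-style formula
model = smf.ols("rt ~ sleep", data=df).fit()
print(model.summary())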

Inverse Efficiency Score

In this post I will briefly discuss a way to deal with speed-accuracy trade-offs in response time (RT) experiments. When conducting RT experiments and collecting correct and incorrect responses to visual stimuli, one can at times find that under certain conditions people respond more slowly but more accurately. For instance, if you have a condition with distractors and people respond more slowly, everything may seem fine; but if you look at the accuracy data (proportion of correct responses), you may see that people also responded more accurately. The inverse efficiency score (IES) combines speed and error. IES has been suggested to be an “observable measure that gauges the average energy consumed by the system over time”. It is calculated by dividing RT by 1 minus the proportion of errors (PE), that is, by the proportion of correct responses (PC). If two conditions have the same mean RT but differ in PE, the IES of the condition with the higher PE will increase more than the IES of the condition with the lower PE. Interestingly, if there is a speed-accuracy trade-off, the IES will even out the PE differences. It is not always better to use IES, however. A lot can change when using IES, because it combines two variables and their sampling error, so the variability of the measure increases. Furthermore, whether dividing RT by PC is a good reflection of the relative weights of speed and accuracy is unclear.
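
A minimal sketch of the calculation (the condition values are made up for illustration):

# Inverse efficiency score: IES = RT / (1 - PE) = RT / PC
def inverse_efficiency(mean_rt, prop_errors):
    return mean_rt / (1.0 - prop_errors)

# Two hypothetical conditions with the same mean RT but different error rates
print(inverse_efficiency(0.600, 0.05))  # ~0.632 s
print(inverse_efficiency(0.600, 0.20))  # 0.750 s - the higher PE inflates IES more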

Quality Python sources

The Python community is exceptional at sharing resources and helping beginners learn to code in the language. There are so many resources available, though, that it can be hard to find all of them.

This page aggregates some excellent Python resources with descriptions of what they offer to readers.

Python for specific occupations

Python is useful in many professions. If you are looking to use Python in a specific field, one of these pages may be right for you.

  • Python for Social Scientists – here you can find a textbook, a course, and slides for a university course that teaches social scientists to use Python. Pretty awesome for psychologists (or other social scientists, of course!)
  • Practical Business Python is a blog that covers topics such as automating the creation of large Excel spreadsheets. It also covers how to perform analysis when your data is locked in Microsoft Office files.
  • Python for the Humanities is a basic Python textbook and course covering text processing. It gets difficult pretty quickly, so after you have read the first chapter you may want to consult another introductory Python course or book alongside it.

Some iPython/Jupyter Notebooks:

I have found some useful iPython/Jupyter notebooks for learning Python. They will be categorized into fields of research.

Psychology/Cognitive Neuroscience

First, we start with some notebooks specific to psychology and neuroscience:

  • Python for Vision Research. Here you will find a three-day course for vision researchers. The focus is on building experiments with PsychoPy and psychopy_ext, learning fMRI multi-voxel pattern analysis with PyMVPA, and understanding image processing in Python.
  • Modeling psychophysical data with non-linear functions by Ariel Rokem. General knowledge of Python is needed. You will learn a definition of modeling and why models are useful, different fitting strategies, how to fit a simple model, model selection, and more.

Statistics-related notebooks

Bayesian Data Analysis Using PyMC3

Introduction to Linear Regression – Linear regression is a commonly used statistical method in social sciences (e.g., Psychology)

Great packages/libraries to use for Data Analysis

I think that I also need to mention some great Python packages that make data analysis much easier in Python.

First, there is IPython. IPython is a command shell for interactive computing in multiple programming languages, although it was first developed for the Python programming language. For scientific computing IPython is really a must (replacing the interpreter that comes with a standard Python installation). It offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history. You can also, as seen above, create IPython notebooks, which can easily be shared as HTML. Great! The IPython notebook is known as Jupyter nowadays: Jupyter.

Here is a user manual that I got from asking a question on a blog: Jupyter Manual. It looks very promising.

pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. Pandas enables you to carry out your entire data analysis workflow in Python without having to switch to a more domain-specific language like R. Pandas makes summary statistics very easy to compute in Python; if you are familiar with R you will like methods such as head, describe, and so on.
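
For example, a minimal sketch of head and describe on some made-up response time data (the column names are my own):

import pandas as pd

# Two conditions of (made-up) response times, in seconds
df = pd.DataFrame({"condition": ["A"] * 3 + ["B"] * 3,
                   "rt": [0.794, 0.629, 0.597, 0.503, 0.500, 0.868]})

print(df.head())                                 # first rows, much like head() in R
print(df.groupby("condition")["rt"].describe())  # summary statistics per condition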

matplotlib is a 2D plotting library which enables you to create publication-quality figures. You can also create interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells (similar to MATLAB or Mathematica), web application servers, and more. matplotlib aims to make easy things easy and hard things possible: plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., can be generated with just a few lines of code. For simple plotting, the pyplot interface is very similar to MATLAB, especially when used within IPython, Spyder, or Rodeo (the last two are great Python IDEs).
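
For example, a histogram really does take only a few lines (a minimal sketch with randomly generated data):

import numpy as np
import matplotlib.pyplot as plt

# Histogram of 1000 random response-time-like values
rts = np.random.default_rng(0).normal(0.6, 0.1, 1000)
plt.hist(rts, bins=30)
plt.xlabel('Response time (sec)')
plt.ylabel('Count')
plt.show()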

Blogs

There are, of course, a plethora of Python blogs. I will mention a few of them that are related to cognitive science.

There are so many more resources. Hopefully, I will update this post or make a new one later. Hope it helps you in the Python jungle, though!

 

Programming in Psychology

Psychology graduates often need to be at least computer literate at a basic level. Computers are, as in most fields, used in many ways, and we need to select and learn the packages relevant to the tasks we do. However, few psychology graduates know how to program. Many have great knowledge of word processing and statistical analysis, but, as you will see, knowing how to program gives you an advantage: you will be able to carry out tasks that not many other psychological researchers can.

Increasingly, psychological researchers find themselves facing exponentially larger data sets, available on the internet (e.g., through data mining) and elsewhere, without the proper tools to handle them. A huge number of psychologists use spreadsheet software (e.g., Excel) for processing data. Often this means spending hours clicking around in the interface or copying and pasting, and with each new dataset the procedure is repeated. Apart from being a huge waste of time, the reproducibility of the work suffers badly: it may be completely impossible to carry out the exact same procedure again, and the work that was carried out may turn out to be useless.

What kind of language should a psychologist learn? Some programming languages, such as MATLAB and R, are commonly used by psychologists (and other cognitive scientists). These languages focus on mathematical and statistical operations. In MATLAB you can run psychological experiments (e.g., using Psych Toolbox) and create mathematical and statistical models of cognitive functions or phenomena; of course, you can also code your statistical analyses in MATLAB. R is more focused on statistical computing, as far as I know; I have seen some modelling and neural network packages for R as well, but I have yet to see a package for conducting experiments. Python is another language that has seen increasing use lately. It is a general-purpose language, similar to C and C++, but it is an interpreted (scripting) language and one of the easiest languages to use. There are packages for creating psychological experiments, such as PsychoPy, and a bunch of libraries that can be used for scraping data from the web. You can also conduct statistical analyses using packages such as NumPy, Pandas, and Statsmodels. In fact, the whole SciPy stack is very useful.

There are also JavaScript libraries for creating experiments that run online. Running experiments online seems to be a fairly new phenomenon, but it gives you the possibility to collect a huge amount of data in a short period of time. For instance, using Amazon Mechanical Turk seems to be popular in the US.

Programming is fun, try it!

Memory and attention

Introduction

In the environment, we are constantly bombarded with a great range of stimuli. Most of these stimuli are of no relevance to us, and we have the ability to successfully filter most of them out. This ability is referred to as selective attention. Part of my research concerns how some stimuli still break through the filter and capture our attention. In this line of work, individual differences have been found in the ability to successfully filter out irrelevant information. For instance, people with greater working memory capacity have been shown to be less distracted by auditory distractors (i.e., one’s own name) than individuals with low working memory capacity (e.g., Conway, Cowan & Bunting, 2001). In this post I will focus on working memory (WM) and its interaction with attention. The basis of this discussion will be selected parts of a review by Awh, Vogel, and Oh (2006).

Interaction between attention and working memory

It has been suggested that attention is “a gatekeeper that determines which items will occupy the limited workspace within working memory” (Awh et al., 2006). In the attentional blink paradigm, participants are to identify and report two visual targets presented rapidly after one another. Typically, processing of the second target is impaired for over 100 ms. Awh et al. argue that the attentional blink reflects a bottleneck in encoding into WM, which results in the impairment of the second target. This would, in Awh and colleagues’ view, be a type of goal-driven encoding that reveals one aspect of attentional control. Moreover, it has been found that semantic information in the second target will enter working memory and, therefore, be processed. The integrity of stored representations within WM is further argued to be determined by internal shifts of attention.

Furthermore, it has been suggested that covert shifts of spatial attention could help maintain information held in spatial working memory, in the same manner that covert articulation has been shown to aid maintenance of information within phonological WM. This is called the attention-based rehearsal hypothesis. Moreover, holding an object in WM will also lead to attentional capture by a subsequent presentation of that object. Holding spatial locations in WM and spatial attention also show overlapping neural substrates: Awh et al. point out that spatial WM and spatial attention recruit overlapping areas of the frontal and parietal cortex. Moreover, the lateral intraparietal area (LIP) has been found to be activated by both selective attention and working memory; in monkeys, LIP is activated when remembering locations (e.g., Bisley & Goldberg, 2003).

However, in the visual search paradigm, data have been collected that do not follow the attention-based rehearsal hypothesis. In this paradigm, participants hold an object in WM that matches a distractor in the search array on half of the trials. If objects in WM captured attention in an obligatory manner, search rates should be slower when the distractor and the object held in WM matched. This effect has not been found in many studies. Awh et al. (2006) suggest that objects in memory can indeed capture attention, but that this effect can be suppressed by other interactions within WM.

Discussion

That there is an interaction between working memory and attention seems quite clear. However, one might wonder in which direction this interaction takes place. Other researchers have focused on working memory capacity (WMC) and attention. It has been found that people with high WMC are less prone to be distracted by the presentation of distracting sounds (Conway et al., 2001; Sörqvist, 2010). Moreover, it has been found that people with low WMC show greater proactive interference, perform worse in antisaccade tasks and the Stroop task, and are more prone to distraction in dichotic listening tasks (see Engle, 2002, for a review).

Furthermore, ADHD, which is an attentional disorder, is related to less activity in the dorsolateral prefrontal cortex (DLPFC), the intraparietal sulcus (IPS), and the supplementary motor area (SMA) (Konrad & Eickhoff, 2010). Individuals with ADHD also often have low WMC, which has led some researchers to suggest that low WMC is one of the problems underlying inattention. Furthermore, in a study examining the neural correlates of the executive functions, the intraparietal sulcus has also been suggested to be related to amodal selective attention to relevant stimuli and suppression of irrelevant external stimuli (Collette et al., 2005). That is, the intraparietal sulcus seems to correlate with the executive function of inhibition as well. Indeed, the executive functions have been found to be highly correlated with WMC (McCabe et al., 2010). Could the intraparietal sulcus be one of the keys here? To be able to sustain attention you most often need to be able to suppress irrelevant information, such as people talking, etc.

In conclusion, there is an obvious interaction between attention and working memory, but how this interaction takes place is somewhat unclear. Without the ability to sustain attention it seems hard to maintain items in working memory, but there also seems to be a connection between working memory and the ability to selectively attend. Maybe this is where other control functions, such as the executive functions, come into play? It seems that there is a plethora of psychological constructs that might share the same underlying brain structures; personally, I would prefer that researchers united around a few constructs. Finally, more research on the interaction between attention and working memory is needed.

References

Awh, E., Vogel, E. K., & Oh, S.-H. (2006). Interactions between attention and working memory. Neuroscience, 139(1), 201–208. doi:10.1016/j.neuroscience.2005.08.023

Collette, F., Van der Linden, M., Laureys, S., Delfiore, G., Degueldre, C., Luxen, A., & Salmon, E. (2005). Exploring the unity and diversity of the neural substrates of executive functioning. Human Brain Mapping, 25(4), 409–423. doi:10.1002/hbm.20118

Conway, R., Cowan, N., & Bunting, M. F. (2001). The cocktail party phenomenon revisited: The importance of working memory capacity. Psychonomic Bulletin & Review, 8(2), 331–335. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/11495122

Konrad, K., & Eickhoff, S. B. (2010). Is the ADHD brain wired differently? A review on structural and functional connectivity in attention deficit hyperactivity disorder. Human Brain Mapping, 31(6), 904–916.

McCabe, D. P., Roediger, H. L., McDaniel, M. A., Balota, D. A., & Hambrick, D. Z. (2010). The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology, 24(2), 222–243. doi:10.1037/a0017619

Sörqvist, P. (2010). High working memory capacity attenuates the deviation effect but not the changing-state effect: Further support for the duplex-mechanism account of auditory distraction. Memory & Cognition, 38(5), 651–658. doi:10.3758/MC.38.5.651