You’ve likely heard the term ‘unconscious bias’ – it’s become an important topic in the area of diversity and inclusion in the workplace, particularly as notable tech companies such as Google and Facebook make headlines in their attempts to address it. Organisations are now increasingly aware of the potential negative effect unconscious bias can have on revenue, productivity and talent – and as a result many are now including unconscious bias training and workshops as part of their HR strategies.
Until recently, the missing piece of the unconscious bias jigsaw has always been the question of what, exactly, to advise people to do about it. After all, unconscious bias is by its very definition not consciously detected, and so not easily open to introspection. Research has now emerged showing that giving people better cognitive strategies not only reduces unconscious bias, but that bias levels continue to fall after the intervention. This work, and a recent review of the wider literature, has given rise to some very practical, research-led ideas on mitigating the effects of unconscious bias.
Here is a selection of things we can all do to help reduce our biases and the effect they can have on our businesses:
1) Get tested: In order to effectively tackle our unconscious biases, it helps to know which social groups we may be biased towards or against. Unconscious bias can be measured using an Implicit Association Test, such as Hogrefe’s Implicitly.
2) Slow down: Allow time for the conscious brain to engage. Delay making key decisions about people to a time when you are able to give full consideration, and then take the time to challenge the decisions you do make.
3) Avoid emotional triggers: When we’re tired, stressed or have other emotionally-draining work to carry out, we are more susceptible to our biases. Think about how you schedule any work which involves making decisions about people.
4) Get a critical friend: Asking someone to make you explain or justify your decisions will make those decisions fairer (explaining them to yourself in the mirror works too!). If we know that our decisions are unlikely to be challenged, we tend to be more biased.
5) Don’t be afraid to ask: Biases struggle to keep their hold on us when we see people as individuals, so endeavour to learn more about your colleagues. People are rarely offended by being asked a question about their lives and they are more often delighted that you are interested. Make this a two-way street: try to be open to others asking questions about you and your life.
6) Give yourself a break: Don’t beat yourself up about the fact that you have biases. We all have them. Feeling bad (emotional load) can make it more difficult to manage any biases you do have. Give yourself a break and relax.
Organisations can be trained to use Implicitly to identify and address bias – and to use the results in a useful and ethical manner, raising awareness and, with time and training, prompting conscious action. You can find more information on Implicitly on the Hogrefe website, including upcoming training dates.
Workplace psychometric tests allow us to assess individuals on areas such as ability, personality and motivation. All our tests, such as our newest, the Leadership Motivation Inventory (LEAMO), have been developed using a rigorous technical procedure to ensure that they really do measure what they claim to assess.
Workplace assessments are mainly used for role selection, personal development or career guidance. They help us to find out a little bit more about someone in a way that is reliable and accurate, and allow us to see differences between people – particularly when using ranking and profiling features, such as those offered by our HTS 5 system.
In general, psychometric tests fall into two main categories:
1. Measures of typical performance
These measures aim to assess how an individual is likely to behave or their typical style of behaving. These can include things such as our interests or our personality. There is no right or wrong answer, as these types of assessments measure what you think. An example of a typical performance question is:
On a scale of 1 to 5, with 1 being ‘strongly disagree’ and 5 being ‘strongly agree’, to what extent do you agree with the following statement?
‘I like most people that I meet’
This type of measure is often referred to as a personality assessment or behavioural test.
2. Measures of maximum performance
Tests of particular abilities or aptitude are known as maximum performance tests. For these types of assessments there are usually right or wrong answers, and you should try to answer the questions correctly. An example of a maximum performance question is:
Question 1: All the houses in Winscombe were built this century or shortly before, but Ferrydale, 20 miles to the north east, has many lovely old houses. Milton is 15 miles north of Ferrydale, with buildings of much the same type. Westwood is a small village south east of Winscombe, with several eighteenth-century cottages.
Which is least likely to have an eighteenth-century house?
Your scores from both typical and maximum performance measures are compared with those of many other people who have taken the test, known as a norm or comparison group. This allows us to see how typical or similar you are to other people. For example, if you scored 30% on an ability test you might think that this is not very good; however, if everyone else scores 20%, then your score is in fact better than most others in your comparison group.
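The norm-group comparison above is just a percentile calculation. The sketch below illustrates the general idea; the scores are invented for this example and are not taken from any real norm table:

```python
# Illustrative sketch of norm-referenced scoring. The numbers are
# invented for this example, not drawn from any published norms.

def percentile_rank(raw_score, norm_scores):
    """Percentage of the norm (comparison) group scoring below raw_score."""
    below = sum(1 for s in norm_scores if s < raw_score)
    return 100 * below / len(norm_scores)

# A comparison group in which most people score around 20%
norm_group = [18, 20, 19, 22, 21, 20, 17, 23, 19, 20]

# A raw score of 30% sits above everyone in this particular group,
# even though 30% might look low in isolation
print(percentile_rank(30, norm_group))
```

The same raw score can therefore mean very different things depending on which comparison group it is referenced against, which is why choosing an appropriate norm group matters.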
Only individuals trained to British Psychological Society (BPS) Test User Occupational: Ability or Test User Occupational: Personality standards have access to occupational psychometric tests in the UK. When you are asked to take a psychometric test, you can check the name of the person responsible for the testing (known as the ‘test user’), and ask about their qualifications by contacting the Psychological Testing Centre (PTC). All qualified individuals are held on a register that you can access through the PTC. You can also do some research on the test itself, and check whether it is sold by a reliable provider. Essentially, though, the onus is on the person who has asked you to complete the test to act in a responsible, fair and ethical way when using psychometrics. Organisations and individuals using psychometric tests must follow guidelines for data protection (Data Protection Act 1998), as well as those set out by the BPS.
When you are asked to take a psychometric test you should be given the following information:
- Why the test is being used
- How the results will be used
- How the tests will be scored and by whom
- What feedback you will receive on your test scores
- Who will have access to the results and how long they will be stored for.
When used appropriately and by suitably qualified individuals, psychometric tests can provide valuable insight to increase self-awareness. It is understandable to feel anxious, worried or even nervous about taking a test. The person who has asked you to complete the test should be able to provide you with reassurance and advice, and you can use the checklist above to make sure that you get the right information.
The DESSA-Mini is the psychometric of choice for a new national character award programme, the Prince William Award
We’re excited to share with you that the Devereux Student Strengths Assessment (DESSA-mini) has been selected as the psychometric of choice for a new national character award programme, the Prince William Award. Education charity SkillForce is delivering the programme to young people across the country and Hogrefe Ltd is delighted to provide them with local access to the measure on behalf of US-based social enterprise Aperture Education. Read the press statement from Aperture Education in full for more on this pioneering new project:
Charlotte, N.C. (June 1, 2017) — Aperture Education, a social enterprise focused on social-emotional learning skills, has partnered with the education charity SkillForce to provide data for a new program to build social-emotional skills in students throughout the United Kingdom.
The SkillForce Prince William Award is a character-building program in the UK for children ages 6 to 14. After a successful pilot this year in 37 schools, the program is expanding to additional schools throughout the UK in September 2017. The program combines practical and reflective learning through classroom-based and outdoor activities to develop character, resilience, compassion, courage, teamwork and problem-solving skills. It draws on the expertise and skills of ex-Services personnel who work as SkillForce instructors.
SkillForce selected the Devereux Student Strengths Assessment (DESSA-mini) to measure the program’s effectiveness. The DESSA-mini is a strengths-based assessment that allows educators to screen students for social and emotional learning (SEL) competence. It identifies students who have SEL strengths, as well as those who need additional support, and tracks their progress over time.
“Helping students build strong social and emotional skills sets them up for success not only in academics, but also in life,” said Marc Kirsch, Director of Sales and Business Development for Aperture Education. “Our partnership with like-minded company SkillForce will help to improve the SEL skills of students throughout the UK. We are pleased at the promising results of the pilot and are excited about the upcoming national rollout.”
Interim research findings from the pilot show that teachers at both primary and secondary schools, across all key stages and settings, have noticed substantial changes in students’ behavior, attitude and skills – in particular confidence, communication and the ability to work with others.
The program’s expansion was announced in March by Prince William, the Duke of Cambridge, who serves as SkillForce’s Royal Patron. It is supported by investment company Standard Life, and its academic partner is the Jubilee Centre at the University of Birmingham.
The pilot, which ran in schools for one afternoon a week throughout the 2016–17 academic year, covered five themes: personal development, relationships, working, community and environment. The program includes levels for each age group: pioneer (minimum age 6), explorer (minimum age 8), and trailblazer (minimum age 12). In addition to team tasks and practical challenges, students receive feedback from instructors that encourages them to reflect on their actions and experiences, and to consider how they would behave differently next time.
“We are thrilled to launch the Prince William Award, a pioneering new program and the first of its kind, which will help children and young people build character, resilience and an inner strength for life. I want to thank The Duke for his fantastic support,” said Ben Slade, Chief Executive of SkillForce. “Character attributes can be developed in children and young people, given the right mentoring. Our ex-Services personnel, who work as instructors in schools, inspire children and young people to dare to be their best selves. Developing personal skills is as valuable as academic study, given that character traits such as courage, cooperation, listening and problem solving can affect academic performance, psychological wellbeing and job success later in life.”
The partnership with SkillForce was developed through Aperture Education’s publisher in the UK, Hogrefe. It is part of Aperture Education’s ongoing work to support students’ social and emotional health in the U.S. and abroad. Aperture Education works with educators, administrators and out-of-school-time providers who are implementing social and emotional learning programs within their schools, providing strength-based assessments and resilience-building resources to help address the whole child. Its goal is to ensure members of school and out-of-school-time communities, including adults, have the social and emotional skills needed to thrive.
Aperture Education offers products and services to support SEL programs, including Evo Social/Emotional, a K-12 online assessment and intervention tool. It uses the Devereux Student Strengths Assessment (DESSA) to help educators measure students’ SEL skills and implement individualized, classroom and school-wide strategies for instruction and intervention.
For more information about the Prince William Award program, visit http://www.skillforce.org or follow @SkillForceUK. Hashtags: #SkillForcePWA #PrinceWilliamAward #BeYourBest #Character #Resilience.
About Aperture Education
Aperture Education is a social enterprise focused on addressing the whole child. Its social-emotional learning (SEL) solution, Evo Social/Emotional, is based on the Devereux Student Strengths Assessment (DESSA), a standardized, strengths-based measure of critical social and emotional skills such as personal responsibility, self-management, relationship skills and healthy decision-making. The Evo Social/Emotional online platform includes both the DESSA assessment and the DESSA-Mini, a brief, universal screener of social and emotional competence. Evo Social/Emotional also provides strategies to strengthen social and emotional skills. Version 2.0, now available, provides the data needed to help SEL program administrators measure the impact of their programs and to help educators understand students’ SEL needs and strengths. For more information, go to www.ApertureEd.com.
SkillForce is a national education charity that specializes in character and resilience, and puts heroes in schools to transform lives, empowering children and young people to make positive choices. The charity’s dual mission includes supporting ex-Services personnel and their transition into civilian life. SkillForce delivers educational programs that develop character, self-confidence, resilience, teamwork and problem-solving skills. Founded in 2000 and a registered charity since 2004, SkillForce has helped more than 60,000 children and young people. The Duke of Cambridge has been the charity’s Royal Patron since 2009. As the home of character education and the Prince William Award, SkillForce works with more than 200 schools in England, Scotland and Wales. For more information, visit www.skillforce.org. Join us on LinkedIn and Facebook. Call 01623 827651.
This press release was originally posted by Aperture Education here.
It is estimated that one in every four of us will experience a mental health problem at some point in our lives. Chances are, even if we ourselves are not suffering, we all have someone close to us who is. Despite these high prevalence levels, there is still very much a stigma around mental health in this country. This stigma is deep-rooted, having somehow become ingrained within our national identity – the British have long been characterised as placing importance on keeping a ‘stiff upper lip’. It’s something that we seem to pride ourselves on, without ever really questioning.
This seems to be embedded within the male psyche in particular – studies have found that men are on average far less likely to seek professional help when struggling with a mental health issue, and suicide rates are alarmingly high (in fact, suicide is the biggest killer of men under the age of 45 in the UK).
And so the honest and upfront talk about mental health in the media over the last couple of weeks, though long overdue, feels incredibly groundbreaking. Few will have missed Prince Harry’s recent headline-making interview, in which he spoke candidly about his own experiences. He and the other young royals also fronted the recent Heads Together #OKTOSAY campaign, aiming to ‘change the conversation on mental health’ to huge press attention. Far from being just another half-hearted celebrity-fronted campaign, this feels like a genuine turning point, and their decision to talk openly has (quite rightly) received praise from mental health professionals, politicians and the media alike. The way we talk about mental health may finally be changing.
The truth is, we can all benefit from keeping tabs on our mental health just as we would with our physical health. No matter what your level of privilege, we all encounter stressful life events one way or another – and whether they be hugely traumatic events, or far smaller, they can all take their toll psychologically. A wealth of psychological and psychiatric research has shown how stressful life events can have a ‘triggering’ effect for many different disorders.
Of course not everyone who experiences an emotionally-salient negative event will go on to develop problems. Clinical Psychologist and emotional processing expert Dr Roger Baker explains that such events “need to be emotionally absorbed, adapted to and integrated into our experience so that we can get on with the task of daily living.” Over many years of research into this process, Dr Baker and his team discovered that there are both healthy and unhealthy styles of emotional processing: “A resilient emotional processing style means that a person can more effectively deal with stressors when they occur so there are less psychological and physical repercussions. Problematic styles of processing may mean that the stressful event is not properly absorbed or integrated, resulting in physical and psychological symptoms.”
The key thing to recognise is that our emotions are not enemy forces to be suppressed or kept in line, as the longstanding ‘stiff upper lip’ approach would have us believe. Rather, they are entirely natural responses – like a second immune system protecting us and helping us to understand and interpret the world around us.
Developed by Dr Roger Baker and his research team, the Emotional Processing Scale (EPS) is a groundbreaking psychometric tool that examines healthy and unhealthy styles of emotional processing and potential deficits. The EPS has already received high praise from the British Psychological Society for its pioneering approach to mental health. Discover the EPS here.
As Brain Injury Awareness Month drew to a close it was fitting that we caught up with Dr Gerald (‘Jerry’) Burgess, author of our test battery for the assessment of acquired brain injury and other neurological disorders, the SPANS. We sat down to discuss the test’s development and utility, the upcoming norms extension project, and current research studies utilising the measure. You can read the interview in full below:
You developed the SPANS ‘on the job’ – what led you to design the test?
It was a new job for me, working as a clinical psychologist across two acquired brain injury (ABI) neurorehabilitation inpatient wards in Leicester. Patients were referred to me who needed, first of all, to understand what consequence(s) they had suffered from an ABI – to have their cognitive skills assessed and to understand their new profile of strengths and weaknesses. This information was needed so that the patients and their families could understand, and so that the neurorehabilitation team could advise them; it was also needed to devise a rehabilitation plan, make placement decisions, understand the recovery trajectory and the predicted level of independence we might expect, and often to make a clinical judgment on their mental capacity to make certain types of decision.
I wished to be thorough in my assessments, and to cover the multitude of abilities I was discovering can go wrong across a frighteningly wide range of patients’ ABIs. There was no comprehensive, yet brief, measure in existence with norms to match my patients’ age range, which ran from late adolescence to old age. Before I developed the SPANS, in order to get an equally comprehensive battery I had to borrow subtests from different batteries, with the associated problems of comparing subtest performances that used different metrics and norm groups. I started designing the SPANS because I needed a comprehensive but brief assessment with subtests co-normed side by side with each other.
What sort of information does the SPANS provide?
The SPANS now has 30 brief subtests spread over seven index scores, namely orientation, attention/concentration, language, memory/learning, visuo-motor performance, efficiency, and conceptual flexibility. Each of these indexes contains two to eight subtests. The reason for so many subtests is that when multiple subtests measure the same cognitive skill, the index score becomes more reliable and trustworthy, raising confidence that what the index score’s label is intended to measure is indeed what gets measured.
Also there are neuro-anatomically different skills that are independent, or doubly-dissociate, that come under the same broad umbrella label. For example, ‘language’ is multi-faceted and includes expression, naming, comprehension, repetition, reading, and writing. All must be assessed for a thorough ‘language’ assessment to have taken place, or for any one cognitive domain for that matter. Thus as in this example, the SPANS screens many independent skills inherent within its subtests, and then cumulatively produces a very reliable index score in the broader cognitive domains. As a result, it is also possible to screen for neurological syndromes using the SPANS, including aphasia and type, unilateral neglect and spatial impairment, object agnosia, agraphia, acalculia, alexia/dyslexia, and apraxia.
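The point that combining several subtests yields a more reliable index score is the classical test theory principle captured by the Spearman–Brown prophecy formula. As a sketch of that general principle only (the reliability figures below are invented, not actual SPANS statistics):

```python
# Spearman-Brown prophecy formula: predicted reliability of a
# composite built from k parallel subtests, each with reliability r.
# Illustrative numbers only - not actual SPANS statistics.

def spearman_brown(r, k):
    """Reliability of a composite of k parallel parts, each of reliability r."""
    return k * r / (1 + (k - 1) * r)

single_subtest = 0.60  # a modestly reliable single subtest
for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(single_subtest, k), 2))
```

Under these assumptions, composite reliability climbs steeply as subtests are added, which is the rationale for building each index score from multiple brief subtests rather than one long one.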
How does this compare to other assessments intended for acquired brain injury?
The SPANS is the only test that considered a wide range of ABIs and after-effects in its design features and item selection, and is therefore like no other. In many cases I see no reason not to make the SPANS the assessment of choice when screening, or when doing a brief yet sensitive, specific and comprehensive assessment with someone with ABI. The SPANS offers the most reliable index scores for the reasons I explained earlier, but also the most reliable retest available through its alternative version, and the best test–retest reliability coefficients in its class of tests. The SPANS has more subtests than any other test in its class, and more – and more reliable – index scores. It thus provides many opportunities for norm-referenced observations of behaviour, yet in administration and scoring combined it takes no longer than other tests in its class. One reason for its comprehensiveness, yet brevity, is that the SPANS was designed to be sensitive and specific with people of normal/average IQ – meaning that it should discriminate, at a high rate, between those with no presence or history of ABI or neurological condition and those with even very mild-to-moderate impairment from some condition that affects cognitive abilities. ROC analysis showed that the SPANS is very successful at this across all of its index scores, particularly visuo-motor performance and efficiency. The SPANS was designed not to do more testing than necessary on any one skill or subtest, while still providing good, accurate information on whether a particular skill or broader domain is impaired, and a means to interpret this in real depth. Some tests in this class over-test, requiring the patient to do more in regard to a single skill or subtest only to come to the same finding or conclusion.
With this, the SPANS can assess more skills in the same amount of time – and patients tend to enjoy it and engage better too, as a consequence of not being over-worked and of doing a variety of tasks.
Can it be used in areas beyond acquired brain injury?
It depends on one’s point of view. Mine is that yes, the SPANS can be used for any neuropsychological assessment, provided the age of the individual being assessed falls within the SPANS’s norms range. The opposing opinion is that a test should not be used with people with a particular clinical/neurological condition until that group has been assessed in sufficient numbers using the tool in question, with those data made available and reported to clinicians.
My view is that the neuropsychological profiles of particular conditions are well known, and thus with a measure designed as the SPANS is – to assess neuro-anatomically dictated cognitive, perceptual and language skills by conventional means, using many sensitive and specific subtests and reliable index scores – the purpose of any neuropsychological assessment can be achieved. What was useful about clinically designing and norming the SPANS on ABI is that ABI is non-discriminatory: it affects right and left hemispheres, includes primary direct damage and secondary effects, and involves every lobe, the mid-brain and brain stem, grey matter and white matter, diffuse or focal. It is then up to the clinician – who may use the SPANS manual, which provides age-referenced norms and in-depth empirical and theoretical interpretative information on each subtest and index score – to apply clinical knowledge, experience and judgment to conduct a thorough assessment.
The SPANS is a flexible tool – could you explain how the test can be adapted to suit the needs of both the patient and the clinician?
As I’ve just mentioned, the SPANS is norm-referenced to the subtest level, so any combination of subtests can be administered and interpreted. The SPANS manual suggests 5-minute and 15-minute screening tests, comprised of the most sensitive and specific subtests, to simply detect the presence or absence of cognitive impairment. One version of these does not even require the stimulus book, although in that case the patient must not be aphasic or have language impairment. If the patient has language, visual or motor impairment, or only has the faculty of ‘yes’ or ‘no’ responding, the SPANS manual recommends designing an individualised assessment that covers the domains of orientation, attention/concentration, memory/learning and conceptual flexibility, plus whatever aspects of visuo-motor performance or language may be possible.
You have a normative study for older adults in the pipeline – please could you tell us about that?
We are collecting norms from about 220 older adults, mostly healthy controls, to add to our existing database of individuals above the age of 75. When this is complete, the SPANS will be re-launched with norms for interpreting the performances of older adults aged 75 to 89, and will include findings from initial studies examining the validity of using the SPANS in this age range specifically, and for the differential diagnosis of dementias generally. The validation studies will involve a detailed understanding of our participants’ demographic and medical histories and their capacity for independent living, plus co-administration of the SPANS with gold-standard measures known to be particularly sensitive to Alzheimer’s, semantic, and multi-infarct or vascular dementias. We will examine which norm stratifications are most clinically useful in these upper age ranges, and whether the SPANS co-varies with – and adds more to an assessment than – traditional gold-standard tests.
Are there other studies that you know of currently using the SPANS as a measure?
There are many studies under way at the moment, some with manuscripts near-ready for submission to an academic journal for peer-review, and then hopefully publication. These studies either involve examinations of the validity of using the SPANS with particular groups, or the SPANS being used to detect changes in cognition following some kind of intervention, or as a means of describing prevalence of cognitive impairment. So in addition to the older adult study under way described earlier, other group studies include English as a second language, non-Western cultural influences, children/adolescents, and learning disabilities. Further psychometrics have been carried out since the publication of the SPANS manual, and these will be published, including ROC and exploratory factor analysis. There is a study that compares the SPANS head-to-head with the RBANS for sensitivity and utility in ABI neuro-rehabilitation. The SPANS will also be used to describe the cognitive profile of long-term incarcerated individuals, and as a pre- / post- measure of vitamin treatment for individuals with poor nutrition and high alcohol intake, often homeless individuals.
Thank you Jerry!
You can find further information on the SPANS assessment here. If you would be interested in being involved in the older adults norming study Jerry mentioned, you can express your interest by emailing firstname.lastname@example.org.
Based on our leading personality assessment, the NEO-PI-3, the NEO-FFI-3 provides a briefer snapshot of personality, cutting the self-report assessment down to a targeted 12 questions in each of the five domains.
Professionals using the NEO-FFI-3 appreciate its quick, accurate measure when time is limited and the full-length NEO-PI-3 cannot be deployed. Take a look at a sample NEO-FFI-3 UK technical report. For more on NEO-FFI-3 features and pricing, visit our NEO-FFI-3 webpage.
Also in the NEO Family:
- NEO-PI-3: Our flagship assessment is even more powerful in its newest UK edition – and now includes both international managerial and professional norm groups.
- NEO Cards: For use in assessment feedback sessions, add a powerful new dimension to your conversations with NEO Cards.
- NEO Primary Colours® Leadership report: Combines personality with leadership ability for a powerful assessment used in selection and development.
Assessing leadership judgement and implicit bias: Hogrefe workshops at the BPS Division of Occupational Psychology Conference
Hogrefe is a long-time supporter of, and participant in, the British Psychological Society’s Division of Occupational Psychology Conference, this year held 4–6 January 2017 in Liverpool. We were delighted to hold accreditation sessions for two of our popular assessments: the LJI and Implicitly.
Leadership Judgement Indicator Workshop
Leadership Judgement Indicator (LJI) authors Michael Locke and Bob Wheeler were pleased to lead Hogrefe’s LJI accreditation workshop at the DOP, welcoming 9 newly accredited test users from universities, public sector, private business and the BPS itself.
The LJI (currently in its second edition, LJI-2) measures accuracy of judgement when dealing with leadership situations. It includes an assessment of the degree to which the leader can flex away from his or her preferred style to the most appropriate one for the particular situation.
As delegates learned, the principles upon which the LJI-2 is based lend themselves to a development technique with proven effectiveness. The LJI-2 has strong psychometric properties, with high internal consistency. Criterion-related validity is demonstrated by a positive correlation between test scores and level of management seniority, and the 16 scenarios have a high degree of face validity.
From our LJI delegates:
‘I would just like to say thank you both for the effort and expertise that you brought to the workshop. It was really informative, enjoyable and very well presented. I have learned so much from your training that will assist me in the future and I hope to use the instrument soon.’
‘Thank you so much for giving up your time to deliver the training on Friday afternoon. I found it really interesting and I’m looking forward to getting on and using the tool.’
Implicit Bias Workshop
Eloise Warrilow, Hogrefe Consultant Psychologist, and Implicitly author Dr Pete Jones were pleased to welcome 22 delegates to Hogrefe’s Implicitly workshop. Implicitly is the first commercial implicit association test to measure the risk of discriminatory behaviour.
The workshop was already full to its stated capacity of 15 delegates, but the additional ‘gatecrashers’ were also made to feel very welcome. Given that the workshop fell at the end of a busy three-day conference, the tutors were delighted with the delegates’ commitment and enthusiasm. The delegates represented the diversity of the occupational psychology world: the Fire and Rescue Service, the Police, the British Armed Forces, the DWP, airlines, analytics and data management, and independent consultants.
Delegates discussed key facts about implicit bias: we all hold some level of unconscious bias, arising from our upbringing, environment and experiences, and these biases can influence how we interact with others. As they learned, once we are aware of our biases we can predict when they are likely to be triggered. When awareness is combined with a motivation to change, we can take steps to moderate biases and challenge negative associations through strategies such as imagined positive contact, remembering good examples and exceptions, widening social networks and perspective taking. During the workshop, the tutors also discussed the flexibility of Implicitly and how it can be used ethically in:
- Individual development
- Team development
- Inspection and audit
Hogrefe is thrilled to have an additional 22 qualified Implicitly users who are able to make a positive impact on diversity and inclusion issues within organisations.
From our delegates:
‘I took a lot from the Implicitly workshop. It was a great way to finish the conference!’
‘Keep up your good work. The DOP 17 workshop exceeded my expectations.’
‘Thanks for letting me join in the Implicitly workshop. I think I found it the most interesting event of the conference.’
With our next ADI-R Administration and Coding course coming up in April, we took the opportunity to sit down with the course trainer, Patricia Rios, to hear more about her years of experience working with people with Autism Spectrum Disorders, and to find out what delegates can expect from the upcoming course. You can read the interview in full below:
Tell us a little about your day-to-day work as a Consultant Clinical Psychologist.
Working within a multidisciplinary team is hugely important to me. Having access to colleagues from different disciplines allows us all to think about a child’s, young person’s or adult’s presentation from different perspectives. Similarly, understanding and assessing our patients’ strengths and difficulties across several areas of functioning allows us to make an informed decision with regard to diagnosis.
As a consultant clinical psychologist I must ensure that everyday clinical practice is informed by the latest research findings. Evidence-based practice within an MDT is the responsible and most effective way to help our patients in the process of ameliorating their difficulties.
Have you always had an interest in Autism Spectrum Disorders?
Yes, I have always been curious about ASD and its effects on the person presenting with it as well as their family. When I joined Professor Rutter’s research team at the Institute of Psychiatry in 1984 I felt so proud. I joined a team of scientists to investigate the genetics of autism. We carried out a collaborative research project based in the UK, USA and Canada and systematically began to investigate this complex behavioural presentation called autism.
I see that you were involved in the development of the ADI. How did that come about and how was that experience?
The family genetic study of autism involved several aims, one of them being to design and develop a systematic way of gathering developmental information about a child presenting with developmental delay and deviance. We set out to define the questions that needed to be asked of the parents for us – scientists and clinicians – to be able to decide if the behaviours described were congruent with a diagnosis of autism. This involved doing some very specific and innovative statistical analyses that identified those questions that were most helpful at targeting and identifying these behaviours.
What sort of information does the ADI-R provide?
The ADI is a standardised, clinician-administered interview that assists the parent in recalling the child’s early history. It sets the scene for the parents and then systematically asks questions that allow the clinician or scientist to make a judgement as to the quality of the behaviour described by the parent.
What can people expect from the upcoming ADI-R training courses you are running? What can delegates expect to take away from the course?
Delegates will be introduced to a standardised and systematic way of asking questions to elicit descriptions of behaviours in children. They will learn about the importance of adhering to the probes in the ADI-R booklet without making assumptions, instead asking the parent to provide examples of behaviours that the delegate can then rate. The beauty of the ADI-R is that it provides a space for the parent to recollect information about their child; with careful and patient guidance, parents do remember their child’s development. Asking for examples is crucial, as they provide the evidence the clinician requires to distinguish a typical developmental trajectory from a deviation from it.
You can find more information and book onto the upcoming ADI-R course here.
Hogrefe was thrilled to welcome training delegates to the first Griffiths III new user course hosted at Hogrefe House from 31 January – 1 February. Delegates, a mix of psychologists and paediatricians, enjoyed three days of training to enable them to use the child development assessment within their day-to-day practice.
Led by experienced trainers Dr Paula McAlinden and Dr Fiona Corr of ARICD, the course began by allowing delegates to familiarise themselves with the kit’s equipment, manuals and forms – before going through each of the test items that make up the five Griffiths III subscales.
A highlight of the course was a series of live administration sessions with local children. With assistance from the course leaders, these sessions provided trainees with practical, hands-on experience of administering and scoring the test. The children ranged in age from just 12 months to almost 6 years – allowing the full range of test items to be used and thus illustrating the breadth of the assessment. The children themselves were clearly motivated by the assessment tasks, and appeared relaxed and at ease with equipment that was engaging and often familiar to them.
In all, we hope that delegates had an enjoyable training experience at Hogrefe House. Together with ARICD, we would like to extend our thanks to parents and their children for their important contribution. We look forward to welcoming our next group of Griffiths III new user trainees to the course in April. If you would like to join us on an upcoming training session, either in Oxford or in other locations across the country, please email email@example.com.
If you would like to train in the Griffiths III and have not used the Griffiths Scales before, you can get started now! Managed by ARICD, new user training is in two stages. First, there is an e-module to complete, information for which can be found here. This is then followed by a three-day training course. Dates and locations can be found on the ARICD website, with more to be added for 2017. If you have any questions about the training or how to book onto a course you can contact ARICD at firstname.lastname@example.org.
TEA Ediciones (Spain) and the publishing division of CEGOC (Portugal), the leading publishers of psychometric tests in their respective markets, are joining the Hogrefe Group. Hogrefe has acquired both publishing companies from their previous owner, the CEGOS Group, with effect from 1 January 2017.
The core field of the science publisher Hogrefe is psychology. Along with an extensive range of books and journals, Hogrefe also publishes a large number of psychological tests that are used in a wide variety of fields, such as clinical assessment, human resources, and education. Hogrefe has long been market leader in this sector for the German-speaking countries and is now also the largest provider of psychological tests in many other European countries.
TEA Ediciones, founded 60 years ago, is the leading Spanish developer and publisher of psychological tests. TEA’s catalogue includes more than 400 different products covering all areas of psychological assessment. TEA also publishes therapy materials and books. Other activities of TEA include services related to psychological assessment, such as scoring, reporting and testing on behalf of customers. TEA Ediciones has also developed a powerful and innovative online system for administering and scoring psychological tests and generating reports. In addition to its headquarters in Madrid, TEA has offices in Barcelona, Bilbao, Sevilla and Zaragoza.
More than 30 years ago the CEGOS Group also set up a subsidiary in Portugal, in this case named CEGOC, to publish psychological assessment tools. The market position of the test publishing division of CEGOC in Portugal is similar to that of TEA in Spain: it too is the leading developer and publisher of psychometric tests for psychologists and other professionals in its market.
CEGOC and TEA Ediciones were until now part of the internationally operating CEGOS Group, the leading European provider for professional education and training, staff development and recruitment. The CEGOS Group is divesting itself of its publishing activities in order to focus fully on its core business. The publishing business of CEGOC in Portugal will from now on be run as Editora Hogrefe, Lda. (Lisbon).
All 50 members of staff at TEA in Spain and at CEGOC’s publishing division in Portugal, the majority of whom are psychologists, will continue to be employed by Hogrefe. Both publishing companies have well established networks of international distributors outside of their home markets. The Spanish-speaking areas of South and Central America are particularly important export markets for TEA, while for CEGOC Angola and Mozambique are currently the most significant overseas markets. Hogrefe is, however, expecting significant synergies to arise from collaboration between its new Portuguese member and Editora Hogrefe CETEPP (Centro Editor de Testes e Pesquisas em Psicologia) in São Paulo, Brazil, which has been a member of the Hogrefe Group since 2015.
Dr. G.-Jürgen Hogrefe (Publisher and CEO of Hogrefe Publishing Group, Göttingen): “The addition of TEA’s and CEGOC’s publishing activities to the Hogrefe Group portfolio means that we can now confidently regard ourselves as the no. 1 test publisher in Europe. With this acquisition we have created an excellent basis for future developments in this area. For many years we have maintained close and friendly business relationships with both TEA and CEGOC and their respective management teams. I am especially pleased that we will now have the opportunity to continue and intensify our work together in the coming years.”
Milagros Anton (General Manager of TEA Ediciones): “During the 60 years of our existence, with our high-quality test publications we have attained a leading position and an excellent reputation in the Spanish market. We are a strong and healthy company. Hogrefe and TEA have for a long time had a great deal in common: we share the same values and business culture, the same desire for high scientific quality of our publications – and many friendships. Joining with Hogrefe means that we are now part of an international publishing group that is specialised in our topics. This offers us attractive prospects for the future. Together we are even better placed to master the challenges that globalisation and digitalisation will bring in the future, for the benefit of both our authors and of the people who use our tests.”
For more, please visit hogrefe.com.