I’m not a discourse analyst, but …

Dr. Dennis Nigbur, Senior Lecturer at CCCU, identifies some of the hidden social psychology of the Brexit debate, and how the result may have given rise to a new and more positive stance towards Europe taken by some of the British people.

My analysis of how the UK became Brexitarian

“This time the men of Western Europe have not lacked courage and have not acted too late. They have done something great and what is remarkable and perhaps unique is that they have done it by renouncing any use of force, any constraint, any threat.”
(Paul-Henri Spaak on the signing of the Treaties of Rome)

By the time you read this, the British Prime Minister will have pressed the big red Brexit button, and the negotiations to take the United Kingdom out of the European Union will be about to begin. And there will be a fair few people wondering how we got to this point. The 2nd February White Paper “The United Kingdom’s exit from and partnership with the EU” says that the national parliament has in fact remained sovereign throughout the UK’s membership of the EU, although “it has not always felt like that”. On the one hand, this – belatedly – dismantles one of the key elements of the Leave campaign: the “taking back control” of a sovereignty that was never lost. On the other hand, it shows the importance of psychology in these political events: perceptions, prejudices, rhetoric, experiences, and fears operate in places that the cold light of day just can’t reach.

I’m no discourse analyst. But a look at the public debate around the 23rd June 2016 referendum can be instructive about the social-psychological elements that played a substantial role without ever being taken seriously by the media coverage (see here also). By pointing at the psychological relevance of what was said and what remained unsaid during this debate, I hope to show how the Brexit vote succeeded but – arguably in a paradoxical case of too little, too late – finally gave rise to something new: an outspoken identification with Europe and a positive attitude towards the EU among large parts of the British public.

Having your cake and eating it

Jeremy Corbyn, leader of the Labour party, has criticised the British government’s unclear position about the consequences of Brexit, saying that the idea to leave the European Union and its single market, but retain continued access to it, seemed like a case of “having your cake and eating it”. Having your cake and eating it is, of course, the strategy that the UK has adopted throughout the process leading up to the referendum. George Osborne talked about having “the best of both worlds”, the benefits without the burden, while David Cameron tried to negotiate a “better deal” as if the UK were about to change its gas provider.

Although both Cameron and Osborne supported the Remain campaign, they had thus already played into the hands of the Leave camp by focusing the debate on British self-interest and a perceived threat to national sovereignty and self-determination. Did you notice how rare continental European perspectives were in this debate, and how both Leave and Remain were fixated on what they believed was best for Britain, that island in the North Sea (see my earlier blog)? By asking for greater national control over immigration, restrictions on welfare payments to migrants, and opt-outs from ever-closer union and the cost of bailouts in the eurozone, the government of the time served and strengthened existing stereotypes. The image of out-of-control immigration due to European legislation, conflated with the refugee crisis, was later exploited by the infamous “breaking point” poster campaign. The image of the EU as an unstoppable Moloch demanding ever-greater sacrifices of wealth and freedom had already been well established in the campaigning of organisations such as the UK Independence Party, and was referenced quite clearly by Nigel Farage in his “victory” speech at the European Parliament following the referendum.

A peculiarly British flavour of talk about Europe

Cameron and Osborne, obviously, were not the first or the only people to use these stereotypes. It was precisely because these representations were already common currency that they could be linked so readily to the government’s efforts to secure a “better deal”, and mobilised so easily alongside some independence rhetoric and fear campaigning. I believe that these were successful because of the long-established British tradition of regarding the EU as a glorified trade agreement at best and a menace to national sovereignty at worst. Other Europeans seem to be much happier to say “Europe is not a market; it is the will to live together” (watch a powerful extract of a speech by Esteban González Pons here). But public debate in the UK tends to ignore the social, cultural and peacekeeping functions of the EU (see here), leaving the complex economic benefits of membership as the only argument with which the Remain campaign thought it could persuade voters. This was not just unromantic in comparison with the heroics of “taking back control”; it had already been undermined by the implication that the UK was not getting a good deal under existing arrangements, and continued to be undermined by the Leave campaign’s fallacious but effective claims about the cost of EU membership.

Although the economic argument would therefore have been strong and relevant in the view of those who regard the EU primarily as a trade partnership, it was weak in the eyes of those for whom emotions and identity played a greater part. Some commentators later wrote disparagingly about the role of emotional, as opposed to rational, decision-making in the referendum. But that’s just part of the story. Psychologists know that emotions are not a weakness or a defective mode of decision-making, but a necessary and powerful influence on everybody’s life – including voting in the referendum. The problem for Remain was not that people voted with their emotions, but that the emotional case in favour of EU membership was never made. Sure, the scariness of uncertainty was pointed out by Remainers with potentially counter-productive monotony. But positive feelings of solidarity with European neighbours, togetherness and a common identity, or a shared vision for a peaceful European future were simply absent from the Remain campaign … because they had never been part of British public discourse about Europe in the first place. One could say that Remain ultimately had fewer ingredients for a tasty campaign, and ended up rather bland.

A bitter after-taste

This was in contrast to the Leave campaign, with its bold aroma of freedom fighting, the spice of rebellion against the establishment, and the distasteful but dramatic references to a foreign invasion threatening a British way of life that was never defined. Britishness is notoriously difficult to grasp, and is instead often circumscribed in terms of its supposed difference to other collectivities including continental Europe (see, for example, Abell et al., 2006; Condor, 2000, 2006). Leaving the EU thus became associated with asserting and defending British uniqueness, regardless of what constitutes that uniqueness. The recent proposal to the UKIP conference to cut VAT on fish and chips after Brexit is a bizarre but fitting example.

Many of the consequences of the referendum campaign so far have been dire. Racism seems to have increased, parts of the media continue to stir up perceptions of threat, the pound has weakened (note the dip in the chart here), the United Kingdom is under internal strain, relations with the EU are frosty, mistrust and attempted underhand deals are continuing to poison relationships and, despite repeated references to a clear mandate arising from the referendum, the social reality is that Britain is pretty much split down the middle.

This should, of course, not be surprising. The referendum results show clearly that the absolute numbers are beyond any margin of error, but the percentages are really quite close. With such an important issue, there seems to be no reasonable argument for why the 16,141,241 people who voted Remain should now simply agree with the 17,410,742 who voted Leave. When even sports clubs require a qualified majority, such as two thirds, to make changes to their rules, isn’t it ridiculous that such a momentous decision as the UK’s relationship with the EU should be carried by 51.9% of those who voted? Presumably, a supermajority was not required because the referendum could not be legally binding – but irresistible political pressure had been created by promising voters that the government would implement their decision.
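Just how close the result was is easy to verify from the totals quoted above. The snippet below is a quick sketch of that arithmetic, using only the official figures already cited in the text:

```python
# Official referendum totals as quoted above; no new data introduced.
remain_votes = 16_141_241
leave_votes = 17_410_742
total = remain_votes + leave_votes

leave_share = 100 * leave_votes / total  # Leave's percentage of votes cast
margin = leave_votes - remain_votes      # absolute winning margin

print(f"Leave share: {leave_share:.1f}% of votes cast")  # 51.9%
print(f"Winning margin: {margin:,} votes")               # 1,269,501
```

A margin of over a million votes is indeed well beyond any plausible counting error, yet it amounts to less than two percentage points of the votes cast.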

Media coverage has focused on the (economic) uncertainty created by the decision to leave the EU (e.g. here). But uncertainty, by its nature, is temporary. What may turn out to be more damaging is the (political) certainty that the UK doesn’t believe in the European project enough to stay committed.

A new recipe

Ironically, the Brexit vote has also created an unprecedented display of enthusiasm for Europe among those who don’t identify with the UK’s official position. It may be too late to change the outcome of the referendum, but a portion of the British electorate – predominantly the younger and more educated voters who tended to support Remain according to a demographic breakdown of voting patterns (see here and here) – is finally displaying a kind of public support for Europe that goes beyond economic pragmatism. The “March for Europe” during the immediate aftermath of the referendum and the recent “Unite for Europe” mass demonstrations are perhaps the most visible way in which this new attitude finds expression. But there are also several social media campaigns (e.g. www.neweuropeans.net, scientistsforeu.uk, or www.the3million.org.uk), as well as a new weekly newspaper (www.theneweuropean.co.uk).

As a European immigrant, I have noticed euroscepticism (perhaps too mild a word for what happened in the build-up to the referendum) and anti-immigrant rhetoric grow during the 20 years I have spent here. As a university lecturer, I take heart from seeing the young and educated as the driving force in the reversal of this process. The recent staff conference about our church foundation identity here at Canterbury Christ Church gave me a very welcome reminder that – even in this strange world of employability drives, league tables, and “excellence frameworks” – education is about much more than getting a degree or qualifying for a job. It’s about growing in wisdom and personality; acquiring the ability to research, understand, and evaluate; and becoming a better person: the sort of person to whom we can entrust the future of the UK and Europe.


Abell, J., Condor, S., & Stevenson, C. (2006). “We are an island”: Geographical imagery in accounts of citizenship, civil society, and national identity in Scotland and in England. Political Psychology, 27(2), 207–226. https://doi.org/10.1111/j.1467-9221.2006.00003.x

Condor, S. (2000). Pride and prejudice: Identity management in English people’s talk about “this country.” Discourse & Society, 11(2), 175–205. https://doi.org/10.1177/0957926500011002003

Condor, S. (2006). Temporality and collectivity: Diversity, history and the rhetorical construction of national entitativity. British Journal of Social Psychology, 45(4), 657–682. https://doi.org/10.1348/014466605X82341


The social psychology of “10 German bombers”: Why a tasteless football chant about the Battle of Britain is more offensive to the English than the Germans

Dr. Dennis Nigbur, Senior Lecturer at CCCU, takes the opportunity to discuss the interplay between two of his favourite topics, namely football and national identity.

They’ve been at it again: English football supporters have treated the world to yet another rendition of “10 German bombers”, and there appears to be some puzzlement as to why the Germans didn’t take more offence (Herbert, 2017).

The simple answer is that they don’t feel targeted by the song. They will recognise it as a chant in poor taste (most Germans speak good English after all – we have an education system that takes languages seriously), but there’s no reason to feel personally or collectively offended by it. Not only does today’s Federal Republic have few similarities with the Nazi state of 75-odd years ago in terms of territory, politics, population, and role on the world stage; the national identity of today’s Germans is also decoupled from the past in a way that the label “German” fails to capture.

I’ve been lucky enough to have erudite social identity scholars as lecturers, supervisors, examiners and bosses: Rupert Brown, Susan Condor, Marco Cinnirella and Evanthia Lyons (and others whose work I have read but with whom I haven’t worked as closely) have all, in their own and often quite different ways, encouraged me to look at social identity beyond the “social identity theory” (Tajfel & Turner, 1979) usually cited as the principal source about the topic. I now encourage my students to do the same. Social identity is about much more than group membership and ingroup bias, and an awareness of the wider literature will help in understanding why “10 German bombers” is in fact more problematic for the English than for the Germans.

All students of social psychology know that social identity has to do with group identification and intergroup comparisons. Temporal comparisons – obviously, comparisons across time points rather than between groups – are an old idea (Albert, 1977) but have only emerged in research on social identity in the past 20 years or so. Amélie Mummendey’s work (Mummendey, Klink, & Brown, 2001; Mummendey & Simon, 1997) is especially relevant here, since it shows how today’s Germans differentiate themselves from the Germans of the Nazi era and derive a positive sense of self from this comparison – an effect more commonly attributed to intergroup comparison.

Admittedly I’m extrapolating from these findings here, but I think I have reason to do so: If the Germans of today see themselves as fundamentally different from the Germans under the Nazi regime and also feel good about that, then why should they feel provoked by a football chant about RAF pilots shooting down German bombers during the Nazi era? The song is obviously in poor taste and intended to offend – but it fails to do so, because of the false assumption that German football fans in 2017 should identify with German bomber pilots in 1940.

Of course, that’s not the whole reason. One other aspect is that, precisely because of historical trauma and self-conscious differentiation from its Nazi past, Germany has a well-documented disturbed relationship with patriotism and national pride, which may make Germans less sensitive to insults directed at their nationality than, say, an English fan would be – the “reformed alcoholic avoiding the wine cellar” (Weidenfeld, 2002, my translation). Second, what may cause offence in everyday life or the proverbial opera house is not governed by the same rules in the milieu of a football match (see Cialdini et al., 1976; Ropeik, 2011): The mismatch between what people are told to do by civil society and what people see as normal practice in the stadium is a good example of the distinction between injunctive and descriptive social norms (Cialdini, Reno, & Kallgren, 1990). (There are chants that I, personally, refuse to sing regardless of the setting; but as a Schalke fan I may have been occasionally complicit in questioning the parentage of our unspeakable black-and-yellow local rivals.)

So why is the England supporters’ chant a newsworthy problem? As The Independent article (Herbert, 2017) suggests, it may be less about the Germans taking offence than about the English being embarrassed. Norms and identity are, again, the central concepts here: By using expressions such as “dragged through the mud”, “the behaviour of scum” or “embarrassment to be English”, the author doesn’t just signal disapproval. He also makes clear that the behavioural norms that should, in his view, be associated with being English are not compatible with the England fans’ actions. Referring to the wartime soldiers of the song, he asks “What would they think to see these people now?” As a wealth of social-psychological research on subjective group dynamics (Marques, Yzerbyt, & Leyens, 1988; Pinto, Marques, Levine, & Abrams, 2010) now shows, the greatest rejection is reserved not for outgroup members, but for ingroup members who break the norms and let their side down. Again, social identity and the world – including the world of football – are more complex than simple ingroup bias.

Key References

Albert, S. (1977). Temporal comparison theory. Psychological Review, 84(6), 485–503. https://doi.org/10.1037/0033-295X.84.6.485

Cialdini, R. B., Borden, R. J., Thorne, A., Walker, M. R., Freeman, S., & Sloan, L. R. (1976). Basking in reflected glory: Three (football) field studies. Journal of Personality and Social Psychology, 34(3), 366–375. https://doi.org/10.1037/0022-3514.34.3.366

Cialdini, R. B., Reno, R. R., & Kallgren, C. A. (1990). A focus theory of normative conduct: Recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology, 58(6), 1015–1026. https://doi.org/10.1037/0022-3514.58.6.1015

Herbert, I. (2017, March 23). English football dragged through the mud again by the braying, beer-fuelled scum who sing anti-German war songs. The Independent. Retrieved from http://www.independent.co.uk/sport/football/international/england-fans-10-german-bombers-braying-beer-fuelled-scum-songs-dragged-through-the-mud-a7645321.html

Marques, J. M., Yzerbyt, V. Y., & Leyens, J.-P. (1988). The “Black Sheep Effect”: Extremity of judgments towards ingroup members as a function of group identification. European Journal of Social Psychology, 18(1), 1–16. https://doi.org/10.1002/ejsp.2420180102

Mummendey, A., Klink, A., & Brown, R. (2001). Nationalism and patriotism: National identification and out-group rejection. British Journal of Social Psychology, 40(2), 159–172. https://doi.org/10.1348/014466601164740

Mummendey, A., & Simon, B. (1997). Nationale Identifikation und die Abwertung von Fremdgruppen. In A. Mummendey & B. Simon (Eds.), Identität und Verschiedenheit: Zur Sozialpsychologie der Identität in komplexen Gesellschaften (pp. 175–193). Göttingen, Germany: Huber.

Pinto, I. R., Marques, J. M., Levine, J. M., & Abrams, D. (2010). Membership status and subjective group dynamics: Who triggers the black sheep effect? Journal of Personality and Social Psychology, 99(1), 107–119. https://doi.org/10.1037/a0018187

Ropeik, D. (2011, October 13). The tribal roots of team spirit. Retrieved March 24, 2017, from http://www.psychologytoday.com/blog/how-risky-is-it-really/201110/the-tribal-roots-team-spirit

Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–47). Monterey, CA: Brooks/Cole.

Weidenfeld, G. (2002, September 23). Deutschlands neuer Weg. Die Welt. Retrieved from https://www.welt.de/print-welt/article412728/Deutschlands-neuer-Weg.html


A relational vision of human distress

Dr. Joe Hinds, Senior Lecturer at CCCU and practising integrative psychotherapist, discusses the distinction between mental illness and eccentricity, and the appropriateness of applying mental disorder diagnoses to atypical behaviours.

In 1961 the psychiatrist Thomas Szasz suggested that the concept of mental illness as conceived by the medical profession was vague and unsatisfactory, comparing the assessment of mental disorders with astrology. Indeed, distinguishing the sane from the insane has never been straightforward (Rosenhan, 1973).

Munch’s “The Scream”

Now as then, there is a reliance on prescribed pharmaceuticals to medicate and ‘cure’ people who display out-of-the-ordinary behaviours and beliefs. Where psychiatry once governed, contemporary approaches largely based on psychological rather than psychotherapeutic understandings reinforce the pathologising of experiences. For instance, according to interpretations of the diagnostic criteria of the DSM (Diagnostic and Statistical Manual of Mental Disorders), where there was once shyness there is now social anxiety, high-spirited children now have ADHD, and people with strident differences of opinion are now labelled as having conduct disorder. Alongside some appeals against the overtly positive culture in which we are encouraged to participate (“Smile or die” – Ehrenreich, 2009), others have suggested that, by strict application of the DSM’s criteria for determining disorders, happiness itself would qualify for inclusion as a disorder.

As a relational psychotherapist, I embrace the simple but important and sometimes difficult arts of listening, empathy, consistency, stillness, and being me in an intimate and emotional therapeutic encounter with another. Interpersonal dynamics, particularly in early life, have much to contribute to our adult functioning, and the role of therapy is therefore to provide a safe and understanding relational space in which to re-address these dynamics. The therapy described here is about demonstrating a genuine humanness, not hiding behind the implementation of techniques or assessment. Why would the person in therapy “expose their tender underbelly if the therapist plays coy and self-protective?” (Whitaker, 1976, p. 329).

In attending to people’s life stories, I have met many who have been labelled as having or being something such as borderline personality disorder, bipolar disorder or psychosis, among many others. Whilst these people may on occasion behave in ways that are non-conformist and disconcerting, such behaviours are often the coping strategies (‘symptoms’) arising from chaotic and often abusive or traumatic life experiences, and I am not convinced that categorisation is a necessary or helpful practice. I have not been tempted to formally evaluate people along any parameter of mental functioning – it is understanding that is needed here, not the assessment and removal of symptoms.

“Every human being is born a prince or princess; early experiences convince some that they are frogs, and the rest of the pathological development follows from this” (Berne, 1966, pp. 289-290).

Moreover, the social and cultural conditions of living are increasingly competitive, time-pressured, intense and confusing, and, when added to personal experiences of trauma or neglect, may result in increased instances of psychological distress (e.g., 2014 had the highest number [130] of UK adult student suicides: Office for National Statistics). Often the conditions of living are the cause of distress: the ‘problem’ does not necessarily reside within or as part of the individual. In the words of R. D. Laing, the celebrated radical psychiatrist, “Insanity is a perfectly rational adjustment to an insane world”.

Dali’s “The Temptation of Saint Anthony”


The difference between being mad and eccentric, and the acceptance of these in society, is often determined by degree of wealth or success: exemplified by creative genius – see Dali and his The Temptation of Saint Anthony above. The very needy or distressed are not seen as such because of their position in society or their acceptance by favourable sections of the population. Paranoia, narcissism and rigid dichotomous thinking (as indicators or symptoms of distress) displayed by elected world leaders, for instance, would attract the attention of health care professionals under very different circumstances.

Yet these characteristics may also be becoming more prevalent in various societies. Democratically elected leaders are, after all, voted in by sizeable minorities who may recognise, applaud and share those characteristics. In addition, our own habitual practices, when set against other cultural standards, may not seem altogether beneficial or rational – including the derogatory categorisation of people according to their level of social conformity. The DSM only completely removed homosexuality from its list of disorders in 1987, when the residual category of “ego-dystonic homosexuality” was finally dropped. It therefore becomes increasingly difficult to separate the so-called ‘mentally ill’ from the so-called ‘mentally well’.

Key References

Berne, E. (1966). Principles of group treatment. New York: Oxford University Press.
Ehrenreich, B. (2009). Smile or die: How positive thinking fooled America and the world. London: Granta.
Laing, R. D. (1965). The divided self. Harmondsworth, Middlesex: Pelican Books.
Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250-258.
Szasz, T. (2011). The myth of mental illness: 50 years later. The Psychiatrist, 35, 179-182.
Whitaker, C. (1976). The hindrance of theory in clinical work. In P. J. Guerin (Ed.), Family therapy: Theory and practice (pp. 317-329). New York: Gardner Press.


The Mystery of Precognition

Dr. David Vernon, Senior Lecturer at CCCU, describes his latest experiments, with intriguing results that defy explanation.

When learning new material, we all know that rehearsing or practising with the material can generally help if we have to recall it at a later date. This is reasonably straightforward and research tells us that such rehearsal can help strengthen the memory trace. No surprise there.

However, what do you think would happen if you were asked to rehearse the material after you had to recall it? Unsurprisingly, most people would say that rehearsing something after it had been recalled wouldn’t be any help. What is surprising, however, is that they might be wrong.

I have now completed four experiments testing the idea that practice in the future can influence performance in the present. Of these experiments, two showed statistically significant effects, where practice in the future led to better recall in the present. Now this may sound odd – and, in fact, if it doesn’t, it probably means you haven’t understood what’s happening, because it is odd. But what does it mean? How can practising something in the future influence performance now? That would be like revising after an exam and the revision helping your exam performance!

The short answer is that I don’t know. Some researchers group this type of finding under the more general heading of precognition, which refers to the notion that you can obtain information about future events. However, others think this could simply be a statistical anomaly, or a blip in the data.
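How plausible is the “blip” explanation? A rough back-of-the-envelope check is possible. The sketch below is my own illustration, not part of the original experiments: it assumes the four experiments were independent and each tested at α = .05, and it ignores effect sizes and any corrections actually applied.

```python
from math import comb

# Under the null hypothesis (no real effect), each experiment has a 5% chance
# of a false positive. What is the chance that at least 2 of 4 independent
# experiments come out "significant" purely by luck?
alpha, n = 0.05, 4
p_by_chance = sum(
    comb(n, k) * alpha**k * (1 - alpha) ** (n - k)  # exactly k false positives
    for k in range(2, n + 1)
)
print(f"{p_by_chance:.3f}")  # about 0.014
```

Under those simplifying assumptions, two or more significant results by luck alone would be fairly unlikely (around 1.4%) – which is perhaps why the “statistical anomaly” reading remains contested rather than settled.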

What do I think? Well, I think a good scientist remains sceptical yet open minded. After all, the history of science is full of people telling us that something is impossible only for later research to show that such unusual ideas are compatible with newly developed theories or findings.

For now, it remains a mystery, and there is nothing a good scientist likes more than a mystery…

For more information on precognition research see:
The Society for Psychical Research: https://www.spr.ac.uk/
The Parapsychological Association: http://www.parapsych.org/

Key References

Vernon, D. (in press). Exploring precognition using arousing images and utilising a memory recall practise task on-line. Journal of the Society for Psychical Research.
Vernon, D. (2015). Exploring precognition using a repetition priming paradigm. Journal of the Society for Psychical Research, 79(2), 65-79.


Animal rescues in the spotlight

Dr. Anke Franz, Senior Lecturer at CCCU, discusses the regulations, or lack thereof, affecting the wellbeing of animals under the care of animal rescues and sanctuaries.

Before Christmas, BBC’s Week In Week Out programme aired an investigation into an animal rescue in Wales, raising issues around standards and licence requirements for animal rescues: “Sanctuaries […] aren’t regulated. Boarding kennels […] need a licence because they are governed by the Animal Welfare Act; so too are breeders and, come to mention it, circuses. As for sanctuaries, well, the act doesn’t cover them, so they don’t need a licence.” This isn’t a problem if they function well, and many local rescues do excellent work. However, if they don’t, there are limited powers to close them down or prosecute.

So how can we know that the local rescue we donate money to, or from which we got our dog, cat or rabbit, is a good rescue?

A good place to start is the Code of Practice written by the Association of Dogs and Cats Homes (ADCH) to be adhered to by its members.

However, are all necessary standards covered in the ADCH Code of Practice? If not, what standards should rescues adhere to? And who will benefit from inclusion or exclusion of certain standards?

Ana Fernandez, Liz Spruin, Nicole Holt and I have been working with several small independent animal rescues across Kent. As part of this work, we asked them what standards animal rescues should adhere to, which of these are basic standards, and which are gold standards – standards that rescues should strive towards beyond the basic ones.

The basic standards were:

  1. Rescues have to thoroughly assess all animals that they take on
  2. Rescues have to provide necessary vet care, including neutering/spaying and micro-chipping
  3. Rescues have to hold valid insurance, including third-party insurance
  4. Rescues have to provide a safe and suitable place to keep the animal, either in a kennel or in a foster environment
  5. Rescues have to complete home checks for all foster and adoption placements
  6. Rescues have to follow up on adoptions, ideally through follow-up visits
  7. Rescues have to offer guaranteed Rescue Back-Up for the duration of the dog’s life (this means that the dog will be taken back by the rescue at any time if the dog cannot be kept by the owner for any reason)
  8. All foster and adoption agreements have to be subject to a contract
  9. At all times, rescues have to be professional and approachable and recognize potential lack of knowledge in fosters and adopters and ensure adequate information and training is provided

The gold standards were:

  1. Rescues should provide ongoing mentoring and behavioural training and interventions to fosters and adopters
  2. Rescues should provide general advice on owning a dog, including costs and nutrition
  3. Rescues should ensure that adequate information regarding potential adopters has been received
  4. Rescues should set up a registry of unsuitable dog owners
  5. Wherever possible, rescues should provide financial and other assistance to struggling dog owners as well as adopters to ensure animals can stay within suitable home environments
  6. Rescues have to provide a high quality of life for all animals in their care, particularly for long-term or permanent residents
  7. Rescues should be transparent about their destruction policies
  8. Forums for social and long-term contact with adopters should be available and fostered.

One topic we discussed at length was euthanasia. Our partner rescues felt that transparency regarding destruction policies would be useful: it would allow everybody to know where problem dogs could go without the risk of euthanasia. However, some of our partner rescues had witnessed serious backlash from other rescues over decisions that they had been forced to take, leading to a reluctance to engage openly with this issue. The same reluctance is reflected in the ADCH guidance, which falls short of requiring rescue organisations to be transparent about their euthanasia policies. Yet greater openness about realities such as these is vital for arriving at workable standards and regulations.

Generally, there is a lot of overlap between the ADCH Code of Practice and the rescue standards that our partner rescues developed; however, there are some key differences.

Our partner rescues (usually smaller rescues, often run entirely by volunteers and frequently relying on foster homes) focused on procedures targeted at increasing success in adoptions, such as conducting home visits for all potential adopters, with dogs often only being rehomed within small geographical areas. In the ADCH guidance, home visits are seen as useful but not essential. For many larger dog rescue centres that, for example, take in stray dogs for the council, the reality of euthanising dogs for a variety of reasons is very tangible. This was highlighted by the Sun newspaper in January 2016, when it revealed the numerous dogs being euthanised by large animal rescue organisations such as Battersea, Blue Cross and the RSPCA (https://www.thesun.co.uk/archives/news/74549/sun-investigation-we-expose-charities-killing-1000s-of-healthy-dogs-for-growling-too-much/). Rehoming dogs to people living further afield, and on occasion even in a different country, can mean one fewer dog has to be destroyed, even though the chance of the adoption being successful is decreased.

In addition, the responsibility of a rescue to take back animals at any time when things don’t work out was non-negotiable for our partner rescues, whereas the ADCH strongly encouraged, but did not require, this.

If regulations were to be introduced, this should certainly be done in a way that does not penalise good rescues, small or large, and that takes account of the diverse workings of different rescues. While larger, more structured rescues could help smaller rescues put in place legal structures and auditing procedures, small rescues and sanctuaries can often offer a lifeline to animals that larger rescues might not be able to work with due to problem behaviour, including aggression. They also often have well-functioning informal networks that help them find expert rescue and sanctuary places for difficult or specialised animals needing a second chance.

There are plenty of excellent rescues of all shapes and sizes, including many small local ones, and it is important that they are able to continue their work, especially in an environment where the sheer number of animals requiring rescue places means that rescues have to refuse many more animals than they can help.


Is Tablet Technology the future in Supporting Communication?

Lena Negal, a recent CCCU BSc Psychology graduate, used her final-year project to look into the effectiveness of new technological advances over traditional paper-based methods in supporting communication with young individuals with ASD. Here, Lena shares the results of her study. The participant is referred to here as ‘Felix’.


The final year BSc Psychology students at Canterbury Christ Church University have worked hard this year and have shown that their research interests cover a wide spectrum of fields. One of these final year projects was my case study investigating the use of tablet technology to support communication in a low-functioning adolescent with autism.

As technology advances, so do our means of communicating as a society. This does not exclude people with autism spectrum disorder (ASD). ASD affects a rising number of people in the United Kingdom and internationally; according to the National Autistic Society (2015), there are currently about 700,000 people in England diagnosed with ASD. As a result of this rising number of children diagnosed with ASD, parents, teachers and governments are desperately looking for helpful support systems to ensure that this growing population of people with ASD can develop and live to the best of their abilities. To achieve this, psychologists are looking at traditional methods already employed and attempting to give these methods a technological rebirth.

When diagnosing children with ASD the areas mainly assessed are behaviour, communication and social skills. Speech, which can be argued to be the basis of these three areas, is limited or non-functional in 50% of the ASD population (described as low-functioning) (Boesch, Wendt, Subramanian, & Hsu, 2013). To support communication, alternative and augmentative communication (AAC) systems are often utilised. AACs include sign language, modified or simplified versions of sign language, speech generating devices and communication systems based on pictures or other visuals. One of the most commonly used AAC systems is the Picture Exchange Communication System (PECS) (Frost, 2002).

Image 1: Photo of Felix’s PECS book (original name and photo of Felix are covered)

Image 2: Photo of Felix’s PECS app (original name of Felix was covered)

The newest app created by the PECS team is called PECS® IV+. It was created as a digital version of a full PECS book; however, the creators of PECS underline that the app is a next step in widening communication after a person has mastered the PECS book, and is not intended to replace it (PECS IV+ App: Interview with Dr. Bondy & Frost, 2014).

To examine the PECS creators’ prediction, I introduced fifteen-year-old Felix, who had previously used the PECS book as his AAC, to the PECS app. To ensure that communication outcomes were representative of Felix’s use of the PECS devices, the research was conducted with three communication partners in their natural environments. This case study used a 2×3 design, examining the relationship between the PECS device (PECS book and PECS app) and the communication partner (mother, therapist and teacher). The video observations of the six sessions were coded with INTERACT 14 and analysed in SPSS using chi-square tests.
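For readers unfamiliar with this kind of analysis, here is a minimal sketch of a chi-square computation on a 2×3 contingency table (device × communication partner). The counts below are invented purely for illustration; they are not Felix’s data, and the actual study used SPSS rather than hand-rolled code.

```python
# Sketch of a chi-square analysis on a 2x3 contingency table, mirroring the
# device-by-partner design described above. All counts are hypothetical.

def chi_square(table):
    """Return the chi-square statistic and degrees of freedom for a contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(col_totals) - 1)
    return stat, dof

# Rows: PECS book vs PECS app; columns: mother, therapist, teacher.
counts = [[40, 35, 30],
          [25, 45, 50]]
stat, dof = chi_square(counts)
print(f"chi-square = {stat:.2f}, dof = {dof}")  # dof = (2-1) * (3-1) = 2
```

A significant statistic would indicate that the distribution of communicative acts depends on the device-partner combination, which is the kind of relationship the study was testing.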

The results showed that Felix used significantly more words in the sessions with the PECS book, while the PECS app increased his use of sounds (which he also uses to communicate). This might indicate that he tried to communicate more with the PECS app by using sounds for meanings he does not have words for. The results also differed significantly between the three communication partners, so differences between communication partners should be considered in future research to ensure the real-life relevance of outcomes when introducing a new AAC device.

Looking ahead, future studies should increase the training of communication partners and code their data to gain a deeper understanding of the interaction differences. That Felix’s family are using the PECS app at home, and that the school is interested in purchasing tablets to introduce the PECS app to several children with similar needs, shows that they have seen positive outcomes for Felix. The use of tablets as communication devices for people with ASD could have benefits such as increased vocabulary, employability and social skills. These outcomes should be examined in larger-scale research that includes longitudinal effects.

References

Boesch, M. C., Wendt, O., Subramanian, A., & Hsu, N. (2013). Comparative efficacy of the Picture Exchange Communication System (PECS) versus a speech-generating device: Effects on requesting skills. Research in Autism Spectrum Disorders, 7(3), 480-493.

Frost, L. (2002). The picture exchange communication system. SIG 1 Perspectives on Language Learning and Education, 9(2), 13-16.

National Autistic Society (2015). How does autism affect children, adults and their families? Retrieved April 18, 2016, from http://www.autism.org.uk/About/What-is/Myths-facts-stats

PECS IV+ App: Interview with Dr. Andy Bondy & Lori Frost (2014). Retrieved April 18, 2016, from https://www.youtube.com/watch?v=1v0kmDMwpVI


Technology and Media in Children’s Development

Suzanne Bartholomew, PhD student in Developmental Psychology shares her view of the Technology and Media in Children’s Development Conference – organised by the Society for Research in Child Development, California, USA.

Using tablets

Technology and Media in Children’s Development
October 27-30, 2016, Irvine, California

So here I am at my desk, fresh (fighting jet-lag but feeling like a kid at Christmas) and back from Irvine, California no less. I had been attending a four-day special topic meeting organised by the Society for Research in Child Development (SRCD). Since 1933, this US-based society has encouraged multi-disciplinary research on child development and supported the application of such findings in order to benefit both children and families (Hagen, 2000). In the introductory speech, organiser Stephanie Reich declared the conference the SRCD’s first special meeting to sell out, with 100% of attendance capacity booked. Not surprising to me, as the meeting was titled ‘Technology and media in children’s development’. Could there be a more relevant topic for conference discussions? (Maybe I carry a slight bias for the topic, as a huge section of my research concerns child and parent use of media.)

The timeliness of this conference could not have been more perfect, as the American Academy of Pediatrics (AAP) has recently adapted its recommendations to meet the changing face of media use by younger consumers. Historically, the AAP has stressed the negative aspects of screen time, relying on empirical evidence implying possible interruptions to a child’s typical developmental trajectory. The AAP’s previous recommendations included that parents of children under the age of two years should apply a blanket ban on the use of all media devices. As we know today, this seems a pretty much impossible guideline for any parent.


With electronic device use, most commonly now the mobile kind, being an integral part of people’s lives, it is no surprise that the UK Office for National Statistics (2014) reports that 87% of UK adults (44.6 million) use the internet, up 3.5 million since the 2011 report (www.ons.gov.uk). Advances in touch-screen technology have enabled younger children to become consumers of this digital world. The newest AAP guidelines acknowledge these changes, scrapping the blanket ban and replacing it, with no mention of time limits, with a recommendation that online media use by up-to-five-year-olds should be supervised by parents as much as the child’s non-media time (Radesky, Schumacher, & Zuckerman, 2015). Read more details of these changes on the AAP website:


My mission (if I chose to take it, which I so obviously did; who in their right mind wouldn’t?) was to fly out to California and give a two-hour poster presentation titled ‘Digital effects of touch screen technology on focused attention and executive function’ (EF). The poster incorporated two studies. The first was research carried out by Dr Amanda Carr in the Canterbury Christ Church observation labs. The study examined attention in a sample of 18 children aged 10-36 months. Attention levels were recorded before and after both free play and touchscreen tablet play. Results showed that although there was an overall drop in attention in both conditions, there was no significant difference in attention after tablet play compared with free play. This suggests that playing on a touchscreen tablet may not have such a negative association with attention span as commonly reported.

The second project was my own work. I looked at the association of screen time with EF and classroom behaviour in an older sample of 8-10-year-olds (N = 276). Correlations hinted at a marginal negative association between screen time and EF skills. However, a larger positive effect was found between screen time and classroom behaviour. We concluded for our samples that, with very young children, screen time had no immediate negative effect on focused attention but that, as age and screen time increased, the effects on EF and classroom behaviour became more pronounced. Both studies have follow-up research being carried out as you read.
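For readers curious what the correlational part of such an analysis looks like in practice, here is a minimal sketch of a Pearson correlation in plain Python. The screen-time and EF scores below are invented stand-ins, not data from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Numerator: sum of cross-products of deviations from the means
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: product of the root sums of squared deviations
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores: daily screen-time hours and an EF task score.
screen_time = [1, 2, 3, 5, 6, 8]
ef_score = [9, 8, 8, 6, 5, 4]
r = pearson_r(screen_time, ef_score)
print(f"r = {r:.2f}")  # a value near -1 indicates a strong negative association
```

A negative r, as in this toy example, is the pattern the study describes between screen time and EF skills; the study’s actual coefficients were, of course, far more modest.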

Presenting both studies to an audience of highly knowledgeable and respected academics was an intimidating prospect that I tried to think about logically rather than emotionally. I was unsure which of the two projects I was more scared about explaining: Dr Carr’s research, which I was under pressure to explain 100% correctly (I couldn’t get anything wrong!), or my own, which, surrounded by all these highly educated people, I felt was possibly all wrong anyway. But that was to take place on the Saturday evening, and I had two days of conference before then.

I spent those two days sprinting from one set of talks to another. My conference diary of ‘things to see’ was full from 8.30am until 8pm. The conference offered many different types of talks, including keynote speeches, hour-long lectures, symposia (ninety-minute sessions with three or four researchers giving twenty-minute talks followed by Q&A) and flash talks (an hour-long session with five or six talks in rapid succession and limited Q&A), all offering and delivering valuable insights into contemporary media use research. There was much information I could share here, but for space limitations I shall concentrate on the keynote speeches.

Opening the conference proceedings were the principal keynote speakers, renowned psychologists Kathy Hirsh-Pasek and Roberta Michnick Golinkoff, with their lecture “Putting the ‘Education’ back in Educational Apps” (Hirsh-Pasek, Zosh, Golinkoff, Gray, Robb, & Kaufman, 2015). The room was full to bursting point as issues in the development and construction of apps designed for young consumers were raised. The conference audience was held captive (me included) as the speakers explained how many apps are being marketed as not just entertaining but also educational. The psychologists explained that such claims are being made without much consideration of the known psychological processes of child learning and development. They offered us the four pillars of learning: a comprehensive framework designed to give app designers and app-buying parents alike the ability to deliberate on an app’s associated learning outcomes.


(Hirsh-Pasek, Zosh, Golinkoff, Gray, Robb, & Kaufman, 2015).

Summing up the vast amount of information given, the four pillars represent the factors required for learning to take place: active, engaging, meaningful and socially interactive. The added necessity is that all four pillars are embedded within the context of guided exploration towards a learning goal. This framework can be applied to almost any contextual learning situation; since a critical element concerns the digital world, however, the researchers gave examples of supporting evidence. An example for the social-interaction pillar was the teaching of a child via three different social conditions: a live video chat between a teacher and the child; the child watching a recording of another child having a live video chat with the teacher; and lastly the child watching a pre-recorded lesson from the teacher. Hirsh-Pasek and Golinkoff reported equally significant learning gains from conditions one and two, while no significant learning gains were made in condition three. The other three pillars were illustrated with research examples and, much to the audience’s delight, the lecture was interspersed with amusing clips of educational apps and not-so-educational apps.

A full PDF of this lecture can be found at:


Doing a bit of name-dropping now: further to the fantastic Hirsh-Pasek and Golinkoff session, the conference itinerary of keynotes included:

  • Justine Cassell: Addressing the influence of peers and how this intrinsically dyadic interpersonal closeness could be replicated by a child with a virtual friend.
  • Patricia Marks Greenfield: Applying the theory of social change in a world where digital technology is a key aspect of our culture.
  • Ellen Wartella: Addressing public policies and asking how the media can be responsible for childhood issues such as obesity.

I enjoyed all of the above sessions, but as an admirer (dare I say fan?) I was most looking forward to attending the Sonia Livingstone keynote speech. Professor Livingstone was introduced as an Officer of the Order of the British Empire, having been awarded the OBE in 2014 ‘for services to children and child internet safety’. The audience delighted in this description, with energetic claps and cheers erupting around the room. In her talk, she asked, “What did I learn from spending over a year following around – at home and school, offline and online – a class of 13 year olds from an ordinary urban London school?” Sound interesting? Yes, it was. Professor Livingstone explained her ethnographic study in great detail, describing how this journey challenged popular inferences about teenagers’ use of the media, including the conception of the ‘digital native’. Further, she questioned the assumption that teenagers need to be constantly connected to the digital world, suggesting instead that it is the ability to enter the digital world when they wish that grants them agency. As a parent of three teenagers, this explanation has made me start to rethink my worries about their mobile phones becoming an extension of their hands. I am taking on board their possible need to protect their digital life and, in doing so, their individuality.

The accompanying book ‘The Class: Living and Learning in the Digital Age’ (Livingstone & Sefton-Green, 2016) is top of my Christmas list; I will ask my children to gift it to me!

(Details of work by Professor Livingstone can be found on the LSE website.)


And so to the reason I was in the United States: the moment had arrived for this highly stressed but excited researcher to present this ‘well-travelled’ poster (approximately 5,318 miles from London to California). We were on stand number ten; number ten out of forty-five! That’s a lot of poster presentations for fellow academics to get through. I nervously stood before our poster, wondering if, after sitting through ten hours of talks earlier in the day, this would just be too much to ask of my fellow attendees. Maybe this would be one poster session too many?


(Please ignore the distortion of the picture. The poster board was bent!)

Nope! It seemed not. The room was teeming; to my relief, enthusiasm had not waned. Explaining the research and answering questions from many interested researchers made the session a great success. I had completed the mission, and I felt extremely proud of our research, a feeling akin to attending the Christmas nativity play when your child has a ‘talking’ role.

And so here I am sitting at my desk still suffering with jet lag but still feeling like a kid at Christmas and planning to put myself through the conference experience again as soon as possible.

Hagen, J. W. (2000). Society for Research in Child Development. AE Kazdin, Encyclopedia of Psychology, 7.

Hirsh-Pasek, K., Zosh, J. M., Golinkoff, R. M., Gray, J. H., Robb, M. B., & Kaufman, J. (2015). Putting education in “educational” apps lessons from the science of learning. Psychological Science in the Public Interest, 16(1), 3-34.

Livingstone, S., & Sefton-Green, J. (2016). The class: Living and learning in the digital age. NYU Press.

Radesky, J. S., Schumacher, J., & Zuckerman, B. (2015). Mobile and interactive media use by young children: the good, the bad, and the unknown. Pediatrics, 135(1), 1-3.


Nonsense, bullshit and constructive dialogues in Higher Education

Dr. Stavroula Tsirogianni, Social Psychology Lecturer with an interest in values, moral dilemmas and perspective taking, talks about her experiences of nonsense, bullshit and constructive dialogues within academia and Higher Education

In June, I went for the first time to the annual psychosocial studies conference to give a presentation on a paper I am writing with a colleague-friend. Psychosocial studies draws on a range of frameworks such as psychoanalysis, critical theory, postcolonial studies, and feminist and queer theories. As it is not a mainstream conference, it was quite small and people were very approachable, which made me really happy to be there. What most impressed me about the conference was the chairing style in some of the sessions. Some chairs did a brilliant job of creating an informal, open and conversational atmosphere, by arranging chairs in circles or by discouraging presenters from standing up, presenting with PowerPoint, or talking for more than 10 minutes. It worked. People in these sessions were more open, and conversations were more relaxed, more interesting and more constructive.

My presentation was on the last day of the conference, and normally I would have been anxious, but this time I felt more relaxed because the setting felt safe. When my turn came and I gave my presentation, I asked the five people who attended my session for feedback and ideas on specific things I felt stuck on. A woman in the audience felt really offended by my ideas, because I mixed mainstream and critical psychological theories to talk about how we construct ourselves as ethical beings through common everyday actions and dilemmas, such as eating our dead pet dog, eating burgers from KFC or wearing leather shoes. She found my ideas to be nonsense. She expressed her contempt by rudely interrupting me, asking me to look at her because she wanted to impart her ‘wisdom’ on me, while another woman in the audience was sharing her interesting experiences of dogs as a black person in South Africa during the Apartheid period. What happened in that session was that she wanted to establish her status as an ‘enlightened’ person who knows better. I did not take the incident personally: as most of us who have been in academia for a long time know, these kinds of hierarchical interactions are common. If this had happened to me 14 years ago, when I first got into academia, I would have been in tears.

…14 years ago…

When I came to the UK I felt like my perception of myself and the world shattered to pieces. I came from Greece, where I grew up in a completely different educational system, in which the teacher was the all-knowing figure. I was taught to look at myself and at the world in fixed binaries, i.e. right vs wrong, rationality vs imagination, individualism vs collectivism. I was mainly trained to look for the truth, reproduce knowledge and produce outcomes through standardised memory testing. Today, I remember very little about the things I learned in high school and university. My education was based on external authority and drills. There was no space for independent action, divergence of views or ambiguity.
When I first came to London and started my PhD at the LSE, I was completely taken aback by the diversity of people and perspectives. I felt ignorant, exposed and lost. I never talked in seminars or classes, and if I had to talk to someone senior, or someone I thought knew more than me, my heart pounded with anxiety. I had internalised this hierarchical way of thinking so much that I constantly felt too inadequate to believe in my own ideas.

Finding myself in such a multicultural and international environment challenged my biases, values and worldviews, and brought up questions about authority and systems of power, their effects, and how knowledge and identities come to be constructed and challenged. My experience of confusion and loss felt like a personal experience that had to be kept separate from the scholarly and educational process. It was through my exchanges and discussions with my peers, and not with my teachers, that I started addressing the connection between what I was learning and what I was experiencing.

Doing a PhD was a very confusing and lonesome process. But I was not the only one; a lot of my mates from my year felt the same way. It was this experience of isolation and loneliness that brought us together. We spent many hours having discussions, bouncing ideas back and forth about our work, our plans, our anxieties, our aspirations, our lives, academia, about everything, over coffee, beer and cigarette breaks (it is when I took up social smoking, which then became regular smoking). I clicked more with some than with others. Those I felt closer to were those who I thought would not judge me for my ideas. Very often our conversations would get very heated, and we would end up arguing and feeling frustrated and defensive, but they were still fun, exciting and informal, and above all they felt safe. Safe enough to play with ideas by talking ‘nonsense’. I came to realise that talking ‘nonsense’ is important in the process of elucidating thoughts and ideas.

Sadly, in my experience of academia over the past 14 years, this type of academic exchange is quite rare in the formal settings of seminars, meetings, symposia and conferences, and even in our classrooms. Scholarly dialogues are usually formal, lack excitement and tend to be competitive. Intellectual conversations take the form of wars between egos. There are always people in the audience who think of themselves as ‘enlightened’ and see their ideas as better than others’, even if their area of expertise is not related to what is being discussed. The aim of such exchanges is to discredit the speaker and to find holes in arguments. Of course, as academics we are passionate about what we do, and we get attached to our ideas, which we try to protect and defend as our ‘babies’. Even the language we use to argue about our ideas, or to describe our experiences of conversations with colleagues, reflects the aggressive nature of academic dialogue. We often use phrases like this person ‘attacked me’ or ‘attacked my views’ or ‘shot down my argument’. Even the PhD viva is called a ‘defence’.

While being critical is a very important part of advancing science and knowledge, the critical view is often associated with justifying theories, providing answers, finding holes in arguments and focusing excessively on details, on a small aspect of an argument. In my view this type of criticism fails to take a discussion in new directions, open new perspectives or generate new questions. Criticism in this context becomes unsafe and threatening, since it prioritises cognitive closure, the quest for truth and a convergence of ideas, shutting down the dialogue rather than fostering divergence of views, complexity, collaboration and the opening up of dialogue (it is worth reading Alfonso Montuori’s work on ambiguity and creativity).

Going back to the idea of nonsense: nonsense is a very important ingredient in critical thinking and imagination. Of course, there are different types of nonsense or bullshit (if you are interested in the topic, it is worth checking out Harry G. Frankfurt’s short philosophical essay ‘On Bullshit’). According to Frankfurt, there is nonsense whose main motive is pretentiousness, and which aims to make an argument that suits one’s own purpose and agenda; and then there is nonsense that does not aim at a specific goal. The second kind is related to the concept of play. This type of nonsense allows us to play with ideas, make sense of them, tolerate and explore ambiguity, and imagine and generate different scenarios, questions and answers. From a developmental perspective, Vygotsky was among the first psychologists to talk about the importance of play in children’s emotional, social and cognitive development, and its contribution to the development of our unique human ability for symbolic representation, such as imagination.

The reality is that, for academics as well as for students, our passions emerge, develop, evolve and are expressed within certain economic and institutional contexts, which can make the concept of play sound ridiculous. I am not trying to promote a romantic view of academia and universities, where we spend hours staring out of our windows at the horizon or the stars talking bullshit (although I quite enjoy doing that with friends when I get the chance). Nor am I claiming that there is not enough creativity in academia. However, if universities and academia are learning communities where old ideas, theories and assumptions about the world are questioned and new ideas emerge, there needs to be space for both convergence and divergence of ideas, and for idea selection.

A learning community is a place of possibilities. I believe that the theories and research we discuss represent only the best possibilities for understanding the world at present, and will be replaced by alternative possibilities in the future. It is important to learn how to give up old ideas about our courses, theories and research, and to unlearn old ways of examining and looking at the world.

Just as societies are not a comforting ‘melting pot’ where different groups peacefully co-exist and agree to disagree, neither is academia. Classrooms, seminars, conferences and meetings are not always safe and harmonious, and it would be a fantasy even to try to make them harmonious places. All knowledge is constructed against different histories of antagonism and misuse of power.

However, it is important to try to foster an environment in our classrooms, meetings and symposia that is collaborative and exploratory. Academic inquiry is not only an individual mental process; it is collaborative, and science is becoming increasingly collaborative. If we take into account the growing rates of anxiety and depression among students and academics in universities today, the need for community becomes even more important. Generative dialogue and collaboration, not ‘cutthroat’ competition, are also key skills that our students need to develop in order to become better able to resolve problems when they leave university. A recent meta-analysis of 168 studies across 51,000 employees in different industries found that leaders reward people who are interested in the success and welfare of the team and organisation rather than only themselves.

The classroom and academia are places where students and teachers learn how passionate engagement with ideas can lead to conflict, confusion and mistakes. Academia, like any other place, is also somewhere we create, accumulate and use knowledge to defend certain positions. Many times we get stuck in these positions because they feel safe, protecting our egos and our privileges. However, especially in light of the current climate in higher education, we need to start fostering collaborative contexts where the discovery, and not the justification, of positions and ideas is rewarded. We need to cultivate a culture that promotes a view of human beings, the world and knowledge as evolving rather than fixed entities. A view of knowledge as an ongoing and evolving inquiry about ourselves and our discipline, one that at different stages generates answers but also more questions, could help us reconcile ourselves with the idea that it is OK to let go of our ideas, assumptions and positions.

1 – https://www.theguardian.com/education/2016/sep/23/university-mental-health-services-face-strain-as-demand-rises-50
2 – http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/a0013079


Comfort Dogs and the Spiritual Experience

Nicole Holt, research assistant to Dr. Liz Spruin, has been exploring the use of dogs for rehabilitation and wellbeing, and the benefits that ‘comfort dogs’ can provide

In America, a new role for dogs has been created: the “comfort dog”. These dogs and their handlers regularly go out into the community and interact with people at churches, schools, nursing homes, hospitals and various social events. The label “comfort dog” was suggested by the Lutheran Church Charities organization in Northbrook, Illinois, which goes on to write that ‘[comfort dogs] share the mercy and compassion of Jesus Christ with…people’.

While compassionate engagement with the community is commendable, some have argued that this work focuses on vulnerable individuals, particularly when the same individuals might be asked for financial donations. There is a wider issue about the safety and welfare of the dogs involved, but so far no substantiated reports of (mis)treatment have emerged.

However, while it is important to note criticisms of the scheme, we should also consider its benefits. For instance, these dogs are also used across the United States to help people in disaster-response situations. One also has to applaud the creative use of such animals, as dogs can be a calming influence on individuals in need of comfort. The dogs themselves may also gain comfort from the people they are helping, leading to a mutual benefit.

Moreover, several case studies, alongside anecdotal evidence, suggest that engaging with dogs can help reduce stress and anxiety (Holder, 2013; O’Neill-Stephens, 2011; Wells, 2009). Many people, for one reason or another, are not in a position to own dogs or even to interact with animals. So, just for a few hours, people can enjoy the benefit of connecting with dogs, which contributes to health and well-being by helping people feel connected with a creature who makes no judgement. As long as a religious organisation is subtle rather than forceful in presenting its own beliefs, and keeps compassion at the centre of its interactions, it is easy to see the positives in this initiative.

As we rapidly push forward with the use of dogs in the courtroom, ‘comfort dogs’ provide food for thought about the many roles dogs can potentially play in society.

To find out more about comfort dogs please follow this link to their website: Immanuel Lutheran Ministries


Holder, C. (2013) ‘All Dogs Go to Court: The Impact of Court Facility Dogs as Comfort for Child Witnesses on a Defendant’s Right to a Fair Trial.’ Houston Law Review, 50, p. 1155.

O’Neill-Stephens, E. (2011) ‘Courthouse Facility Dogs.’ [Online] Available at: http://www.courthousedogs.com/starting_news.html (Accessed 14th September 2016).

Wells, D. (2009) ‘The effects of animals on human health and well-being.’ Journal of Social Issues, 65(3), pp. 523–543.


Artificial Intelligence in Teaching: The State of the Art

Our technician Richard Weatherall attended a talk on Artificial Intelligence in Education (UCL Knowledge Lab/Pearson) on 24th March

The term artificial intelligence (AI) evokes a wide range of associations, usually depending on an individual’s experience of or exposure to such systems. Many still hold the opinion that AI will herald the end of humanity, believing that its advent will make humans redundant or obsolete, or even that it will advance to a state where it decides it has no need for humans and begins to act accordingly.

Fear-mongering aside, AI is already becoming embedded in our society, with companies such as Apple, Google and Microsoft all deploying AI personal assistants (Siri, Now and Cortana) as standard with their products. Historically, there has always been resistance to a paradigm shift such as this, and while it is true that some human jobs may be replaced by a mechanical or synthetic counterpart, the opportunities created are often so vast and diverse that even the most prophetic cannot envision the direction of the future unravelling before them.

The International Artificial Intelligence in Education Society is concerned with research into, and development of, AI for learning; it believes that well-designed AI, working in collaboration with teachers, parents and learners, is paramount to maximising the vast benefits AI can offer. A recent talk hosted by the UCL Knowledge Lab, London, in collaboration with Pearson, highlighted current research in the area, as well as demonstrating AI currently being deployed to assist learning.

The talk began with a review of various meta-analyses looking at the use of a number of AI systems and programs in classroom environments. Results varied slightly between studies, but overall gave a positive view of AI integration. Notably, intelligent tutoring systems performed as well as real teachers in one-to-one (non-expert) tuition. One common theme, however, was that despite the varied and positive ways AI systems have been implemented, there is no AI substitute for classroom experience: AI systems greatly aid the teacher, but are not able (or intended) to replace them.

The need for properly developed teaching aids is more important than ever given the government’s plan to academise schools, which may result in fewer regulated, experienced teachers and a focus on the financial bottom line. The bottom line for most teachers, however, is that teachers love to teach; technology must enhance, enable and empower them to this end, which is the goal of AI in education.

Whatever your opinion of AI, its widespread development and use only appear to be increasing. AI therapy programs, for example, are already being deployed to aid mental health in remote areas and in populations isolated by war. As computational models of human emotion become more sophisticated and machine learning becomes ever better at detecting the mental state of its user, the need for proper theoretical and experimental, data-driven designs is paramount. As one guest involved in computing and psychology at the AI.ED talk noted, computer programmers are not inherently psychologists, and psychologists are not trained as programmers – a knowledge gap which must be closed so that AI can be safely and productively deployed for the maximum benefit of humankind.
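To make the idea of “detecting the mental state of a user” concrete, here is a toy, purely illustrative sketch in Python of a lexicon-based mood scorer. Real systems use trained machine-learning models over rich features, not hand-written word lists; the word lists and function name below are invented for this example:

```python
# Toy mood detector: scores text against tiny hand-written word lists.
# Purely illustrative of the task, not of how deployed AI therapy tools work.
NEGATIVE = {"sad", "anxious", "hopeless", "tired"}
POSITIVE = {"happy", "calm", "hopeful", "rested"}

def mood_score(text):
    """Return a score in [-1, 1]; negative values suggest low mood."""
    words = text.lower().split()
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    total = neg + pos
    return 0.0 if total == 0 else (pos - neg) / total

print(mood_score("I feel sad and anxious today"))  # -1.0
```

Even this trivial sketch shows where psychological expertise matters: which words signal which states, and how to handle text that matches nothing, are modelling decisions, not programming ones.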

Intelligence Unleashed: An Argument for AI in Education is a publication produced by the UCL Knowledge Lab and Pearson to inform and educate about the current state of AI.

Some examples of current AI systems being researched and implemented in classrooms, and presented at AI.ED, include Betty’s Brain, italk2learn, Zondle, The Tardis Project and Whizz Education (http://whizz.com).
