Monday, November 27, 2017

Foucault on Writing; Making Time for Writing

From Clare O’Farrell’s Foucault site – reposted with commentary at her Refracted Input blog:
Does there exist a pleasure in writing? I don’t know. One thing is certain, that there is, I think, a very strong obligation to write. I don’t really know where this obligation to write comes from… You are made aware of it in a number of different ways. For example, by the fact that you feel extremely anxious and tense when you haven’t done your daily page of writing. In writing this page you give yourself and your existence a kind of absolution. This absolution is indispensable for the happiness of the day… How is it that this gesture which is so vain, so fictitious, so narcissistic, so turned in on itself and which consists of sitting down every morning at one’s desk and scrawling over a certain number of blank pages can have this effect of benediction on the rest of the day?
You write so that the life you have around you, and outside, far from the sheet of paper, this life which is not much fun, but annoying and full of worries, exposed to others, can melt into the little rectangle before you and of which you are the master. But this absorption of swarming life into the immobile swarming of letters never happens.
Michel Foucault (1969) ‘Interview with Claude Bonnefoy’, unpublished typescript, IMEC B14, pp. 29-30; also available as Michel Foucault à Claude Bonnefoy – Entretien Interprété par Éric Ruf et Pierre Lamandé, Paris: Gallimard (audio CD).
It’s a great quote, certainly. I definitely feel the same way if I’ve not been writing for a while. I’ve been asked more than a few times about writing – usually at the end of question sessions after papers, or when I’ve initiated a conversation with graduate students about publishing, or most often over dinner or in the pub. People are sometimes interested in more general questions about writing, but the most common one is ‘how do you write so much?’ The answer is pretty simple: I try to write every day.
When I’ve been at my most busy – as director of postgraduate students at Durham, while in the first year of editing Society and Space – I would schedule writing time, if not every day, then definitely into every week. I made ‘appointments with myself’ for other key tasks too. I would tell people who had access to my diary that they could move the writing or other task appointments, but not reduce them. So they could be at different times of the day or week to accommodate other things, but not disappear.
Clare links to a couple of reviews of books on academic writing that give similar advice – the way to write is to make time to write. Jo van Every says the same here, and links to this useful post on what you can do in thirty minutes. That last one is interesting as the numbers would change for different people, but the principle is good.
But what do you do if you’re not in the right frame of mind to write when that time comes around? This is a common follow-up question. Then you do the mechanical things that writing requires – you open up the notes file and tidy them up, you download journal articles, get shelfmarks for books you need to check out, fill out the inter-library loan forms or locate a library that has it, check the author guidelines for the target journal, print the last draft and read it over for grammar, maybe seeing a link or sparking an idea… You get the point. But it should be something that moves the writing on, however incrementally. Graham Harman has a good post on working on different bits of the project in parallel, so you can move to a different bit if you get tired of one part.
And while it isn’t counting words that matters, think of it this way: Take a 52 week year. Take four weeks holiday. Take three days per week with time set aside for writing. That’s 144 writing days. Write 500 words a day – about the length of this post, without the quote, or a page of a printed text. That’s 72,000 words. Two articles and half a book. So then a couple of articles a year and a book every two or three isn’t exactly Sartre-level words per day madness…
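The arithmetic is easy to check. Here it is as a few lines of Python – a back-of-the-envelope sketch using the same numbers as above:

    # Back-of-the-envelope check of the writing arithmetic above.
    weeks_per_year = 52
    holiday_weeks = 4
    writing_days_per_week = 3
    words_per_day = 500

    writing_days = (weeks_per_year - holiday_weeks) * writing_days_per_week
    words_per_year = writing_days * words_per_day

    print(writing_days)    # 144 writing days
    print(words_per_year)  # 72000 words: two articles and half a book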

What Quaker Schools can Teach the Rest of the Class About Equality, Mutual Respect and Learning

by Nigel Newton, University of Bristol, The Conversation: https://theconversation.com/what-quaker-schools-can-teach-the-rest-of-the-class-about-equality-mutual-respect-and-learning-86657


The head of England’s schools inspectorate believes that British values, including tolerance, openness to new ideas and mutual respect, should form a central part of school education.

Amanda Spielman, the new Ofsted chief, said the education system has a “vital role in inculcating and upholding” these values. She went on to praise one school which promotes inclusiveness, and another where a “values-focused” thought for each day informs teaching.

But the very subject of teaching values in school can be problematic. Whose values are really being taught? How will a school’s performance of this duty be measured? Others think we should step back from the question of “British” values and focus on helping children develop a “virtuous” character.

But what happens when an entire school culture is seen by its students as promoting equality, mutual respect and inclusiveness?

New research reveals a significant relationship between Quaker school values and their students’ engagement with learning opportunities. Quaker schools are not common (there are ten in the UK and Ireland, and 100 in the US), but they exist in 15 countries around the world. Some are very well established and highly thought of – both the Clintons and the Obamas sent their children to a Quaker establishment, Sidwell Friends School, while living in the White House.

There are several things which make the English Quaker schools involved in the research distinctive. First, they all hold a “Meeting for Worship” which looks similar to a traditional school assembly in which the whole school gathers. Everyone sits in silence and all have the opportunity to address the room. This practice underscores another distinctive feature, which is that Quaker schools assert that everyone is equal. Schools try to reflect this in the way they listen to students and encourage positive relationships between year groups and between students and staff.

Although independent, Quaker schools rarely admit students based on academic selection. Quakers believe there is “something of God in everyone”. They actively encourage inclusiveness and stress that each student will grow and develop in their own way.

Yet counter-intuitively, students often perform very well in exams and the schools punch above their weight in academic results. So do aspects of the Quaker school culture contribute to students’ successful learning?

We found that students who were more likely to study without being told to and who enjoyed and took more interest in their subjects were the ones who also saw their schools as places characterised by friendliness, an egalitarian ethos and somewhere they rarely felt pressured. These students also tended to value the Quaker practice of silence and the weekly all-school Meeting for Worship, in which anyone can share a thought or express an opinion.

Interviews with students revealed how friendly relationships create strong bonds of trust, grounded in mutual respect and the Quaker belief in equality (perhaps surprising given that only 3% of students and 8% of teachers at the schools come from a Quaker background). 

Students recognise teachers as supportive and “on their side”, which leads to honest conversations about their studies and a feeling of increased responsibility for their own learning.

One Year 10 boy said:
If you have a good relationship with the teacher or you are more friendly, then it is easier for you to get into the subject and learn more.
A girl from Year 9 told us:
I think the Quakerism influences us a lot. I think that’s what gives a lot of the friendly environment because you know that you’re equal whoever you are.
The Meeting for Worship was seen as providing an opportunity to reflect, contributing to the relaxed atmosphere of the school. But it also confirmed the place of students’ voices and the importance of community. This helped students feel they could be themselves and feel supported to do the best they could – although this “best” was not confined to examination performance.

A working relationship

According to one female student, the friendly atmosphere “helps you learn more, because you feel under less pressure to understand [the subject] straight away”.

Interviews with teachers confirmed the perspectives of students. They felt there was a focus on providing a wide and varied education, which was not defined principally in terms of exam grades. Many teachers referred to their sense of freedom to teach students as individuals, without feeling pressured by evaluations.
“The children are allowed to be themselves, but we are as well,” said one. “Everyone is welcomed and tolerated so it is a very accepting environment, and that makes for a very pleasant environment to teach in.”

Several factors linking back to the Quaker belief in equality and the practice of open worship appear to help explain the relationship between students’ willingness to engage with learning and their lack of anxiety in relation to study, as well as their ability to make the most of the support offered by teachers. In particular, there seems to be a relationship between the inclusive ethos of the schools and an orientation towards educational engagement in students.

In seeking to explain these relationships, we’ve come to see that inclusiveness may be important to education because learning is really about being open to receive “the other”. Curriculum content is one of these “others”. Students who have been encouraged to practise inclusiveness towards fellow students – and have seen this modelled by their teachers – become more disposed to receive the “otherness” of new learning opportunities.

Spielman may be on to something in her desire to see values play an important role in school education. But the challenge will be to help schools adopt cultures where those values are authentically – and visibly – practised.

Nigel Newton, Assistant researcher, University of Bristol

This article was originally published on The Conversation. Read the original article.

Saturday, November 18, 2017

ONLINE COURSE: GAMSAT Essay Secrets

Are you trying to get into the medical school of your dreams? Do you need an advantage over your competitors? Have you sat the GAMSAT exam previously only to bomb out in the essay section?

Dr Robert Muller has created a GAMSAT essay writing strategy that has been devised over the last 10 years in response to the main problems that candidates face in writing the GAMSAT essays.

This course, "GAMSAT Essay Secrets", provides a detailed essay-writing strategy which is unique, and to which the GAMSAT examiners respond VERY positively when the strategy has been mastered and used well in the exam.

The rationale for this approach is that the overwhelming majority of GAMSAT essay writers construct their essays according to the overall theme of the five statements provided in the exam (for each essay).

Instead, Dr Robert's strategy is one of responding to ONE single statement, arguing/discussing VERY directly, and using examples skilfully.

The point is this: if you want to put yourself above the majority of candidates, you need to take a different approach to your essay writing. If the examiners see that 95% of candidates are writing their essays in the same way, and then along comes your essay with a completely different approach, it makes them sit up and take notice. If you master the strategy presented here, it will give you a significant advantage over your competitors.

Don't forget that in addition to the online course, you also get feedback and guidance on 10 GAMSAT practice essays at no additional cost (valued at $300).

Check out the course at:

https://premiumcoaching.withcoach.com/gamsat-essay-secrets-fb8d8d53-a697-4ee7-b009-b2c0c5dde53b

Tuesday, November 14, 2017

The PhD: Notes to My Younger Self

As PhD students, we tend to live day-to-day while keeping in mind the potential of a future in academia. We leave little room to think about how we might frame today’s experiences when they become our past. Dr. David Whillock, who finished his doctoral research in 1986, reflects on the lessons he has learned after 30 years in Higher Education…
They say that hindsight is 20/20.  There is a lot of truth to that. As I get to the end of a long and wonderful career in higher education, there are several things I wish I had known while going through the Ph.D. process and things I wish I had known as an Assistant Professor attempting to gain a reputation and building a case for tenure. I’ll pass these along in hopes I may be able to tap into some of your concerns, frustrations, or hopes.
My best advice to those who are just entering doctoral programs is to have a passion for, and to focus on, the subject of your dissertation. The first thing you need to do, I mean the first, is to find an advisor/supervisor whom you identify with and who will accept the premise and methods of your subject matter. You don’t want to start a program attempting to “change the mind” of your dissertation advisor. That is a long and losing battle you don’t need. Trust me, in defending your dissertation, you want to make certain your advisor is fully on board with your content, method, and findings. That is not the time to argue a point, but to enlighten the life of the mind.
While in your program, use every opportunity to move your dissertation forward. Attempt to make every class/conference/journal paper an opportunity to use your content and/or methodology of your dissertation.
One more thing: remember you are writing a dissertation, and keep that goal clearly in your head. The book will come later; if you can’t finish a dissertation, you won’t earn a Ph.D. Dissertation first, book later. Some colleges and universities won’t even count your dissertation toward tenure, even if it is in book form. So, focus, focus, focus.
When I served as Chair and Dean, many of my new hires were eager to make a name for themselves in their field of study and in the classroom. That level of energy is a good thing. I would suggest being strategic in this desire to make certain that, if you want tenure at your institution, you have a higher chance of getting it. As Chair, I asked my “junior” faculty to resist volunteering for everything – or for anything, really, that did not move them toward that goal. Have the Chair help you be selective in the committees you serve on. You want to be known on campus outside of your department and college. Many of those campus colleagues will serve on the university committee that will approve, or not, your bid for tenure. Choose wisely. The same applies as a Ph.D. student: be strategic and selective.
Interestingly enough, I tell first-year students the same thing I would tell my new faculty members: manage your time. It is imperative that you literally put time to research and reflect on your calendar. Take a walk… visit faculty from other departments outside of the building you are working in. Some of my better ideas have come from faculty colleagues outside of my discipline. Indeed, several collaborative opportunities have come from these walks. But most important of all is a clear head, and knowing that the world will operate and be fine without you for a period of time.
One last thing: get balance in your own life. Anyone in any working environment who doesn’t have a hobby or a life beyond the academy will eventually be lost. I have a lot of colleagues well into their 60s who have no plans for life beyond the academy. I want to stress the importance of balancing your life with people, events, and activities beyond the academy. Eventually even the best faculty realize it is time for a new generation of scholars to take the stage and push a new group of students to excellence. Stay relevant in your scholarship, but “get a life”.
Are there things you already wish you could tell your younger self? Have you been actively selective and strategic during your PhD Life? Tweet us your advice at @ResearchEx, email us at pgcommunity@warwick.ac.uk, or leave a comment below.
Dr. David Whillock is the Associate Provost and Dean of the Academy of Tomorrow. He holds a Ph.D. in Critical Studies from the University of Missouri. His specializations in teaching and research include History and its Depiction in Cinema, The American Vietnam Film, A Cultural Perspective on the Blues, and Ways of Knowing. He is the guitarist for the South Moudy Blues Band. He has published in the Journal of Film and Television, The Journal of Popular Culture, and Southern Communication Journal. He has contributed chapters to America Rediscovered: Critical Essays on Literature and Film of the Vietnam War, Hate Speech, and Vietnam War Films.

Demand for People Skills is Growing Faster Than Demand for STEM Skills

by Claire Mason, Andrew Reeson and Todd Sanderson, CSIRO, The Conversation:
https://theconversation.com/demand-for-people-skills-is-growing-faster-than-demand-for-stem-skills-86754


High-level interpersonal and problem-solving skills are what will make you employable in a digital world. Shutterstock

Advances in digital technology are changing the world of work. It has been estimated that more than 40% of human workers will be replaced by robots. This probably overstates the scale of displacement, but developments in the fields of artificial intelligence and machine learning will affect all sectors of the economy.

However, the impacts of digital disruption will not be evenly distributed. Previous waves of technology had the greatest impacts for workers in routine jobs, but now a growing number of roles may be at risk.

Even so, workers whose skills complement technology, rather than being substituted by it, can use the new technology to be more productive and command higher wages.

What types of skills will ensure you are employable in the world of human and robot workers?

Two recent reports, “The VET Era” and “Growing Opportunities in the Fraser Coast” challenge the rhetoric around the importance of STEM skills in the digital economy, by revealing how demand for skills has changed over time.

1. Increasing demand for highly skilled workers

These analyses show a major shift in the skills profile of the Australian workforce. The Australian Bureau of Statistics (ABS) classifies occupations into skill levels based on the amount of training and experience required to perform the job.

In 1986, the largest group of workers was in occupations classified as skill level 4 (roughly equivalent to a certificate II or III). Since then, demand for highly skilled workers has grown rapidly. Nowadays, the largest group of workers is in the highest (skill level 1) category - occupations requiring a bachelor degree or higher qualification.



Essentially, increased reliance on technology in the work environment raises demand for more highly skilled workers, because more of the routine work is automated. While it is good that more of us are working in more rewarding jobs, not everyone has benefited from this shift. Nor can the current winners in the digital economy afford to be complacent. As the capability of digital technology increases, a growing range of tasks (such as data analysis and diagnosis) can be automated.

So what types of skills should we be developing when we invest in the higher qualifications that are now required in most jobs?

To answer this question, we linked Australian employment data with United States data on the skills and abilities associated with different occupations.

By linking these datasets, we could estimate (based on the changing occupational composition of the Australian workforce) which skills and abilities were becoming more or less important. For simplicity, we have grouped these skills and abilities into four categories: traditional Science, Technology, Engineering and Maths (STEM) skills, communications skills, technical skills and generic STEM skills.
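As an illustration of that linkage step, here is a minimal sketch in Python/pandas. The file names and column names are hypothetical – the authors’ actual datasets (ABS employment figures and US occupational skill ratings) are not reproduced here – but the join-then-weight logic is the general approach described above:

    import pandas as pd

    # Hypothetical inputs (names are illustrative, not the authors' data):
    # employment.csv: occupation, year, workers     (ABS-style employment counts)
    # skills.csv:     occupation, skill, importance (US-style skill ratings)
    employment = pd.read_csv("employment.csv")
    skills = pd.read_csv("skills.csv")

    # Link the datasets on occupation, then weight each skill's importance
    # by the number of workers in the occupations that require it.
    linked = employment.merge(skills, on="occupation")
    linked["weighted"] = linked["workers"] * linked["importance"]

    # Demand index per skill and year: employment-weighted mean importance.
    totals = linked.groupby(["skill", "year"])
    demand = totals["weighted"].sum() / totals["workers"].sum()

    # A rising index means the workforce is shifting toward occupations
    # in which that skill matters more.
    print(demand.unstack("year"))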

2. Communication and people skills are increasingly important

The analyses reveal that, despite all the hype about STEM skills, occupations requiring communication skills are actually growing fastest.



As our work becomes increasingly technologically enabled, human workers differentiate themselves from machine workers through their ability to connect, communicate, understand and build relationships. Most of us now work in the services sector. This is the sector that will continue to grow as the population becomes older and wealthier, as we up-skill and re-skill more often, and as the incidence of mental disorders, chronic diseases and obesity continues to rise. The delivery of these services requires people-focused skills such as active listening, empathy and teamwork.

3. Programming skills are less important than digital literacy

Given that coding is now part of the curriculum for Australian primary school children, it may be surprising to learn that growth in demand for communication skills actually outstrips growth in demand for STEM skills. More detailed analyses provide further insight into the way demand for STEM skills has been evolving.



What they reveal is that the STEM skills needed in a wide range of contexts and roles are those that involve working with (rather than programming) technology - skills such as the ability to think critically, analyse systems and interact with computers.

More traditional STEM skills (such as physics, mathematics, and programming) have been experiencing relatively low growth. In fact, recent research from the United States found that there has been a slight decline in the number of traditional STEM jobs since 2000.

Although traditional STEM skills are important, they are only needed by a relatively small number of highly skilled professionals - perhaps because programming work is itself able to be automated and sent offshore.

These STEM professionals also tend to achieve higher incomes if they combine their technical expertise with strong social skills, allowing them to make the connection between technological capability and social needs. While the most skilled coders will continue to have great opportunities, most of us will just need to be able to work with technology. People skills will continue to become more, not less, important.

As the capability of technology continues to develop, human workers need to focus on building skills that complement technology. High-level interpersonal and problem-solving skills are not so easily automated. Given that we will need to find new jobs to replace those lost to the robots, we also will need entrepreneurial skills to create and grow the new economic opportunities enabled by these developments.

As technological advances occur ever more rapidly, we will need to keep discovering new ways of using technology to perform our work. With strong communication, problem-solving and digital literacy skills, we can harness the power of digital technology to solve a customer’s problem, grow productivity and improve our world.

Claire Mason, Data61 Senior Social Scientist, CSIRO; Andrew Reeson, Economist, Data61, CSIRO, and Todd Sanderson, Research Scientist in Digital Economics, CSIRO

This article was originally published on The Conversation. Read the original article.

Monday, November 6, 2017

Tutors Are Key to Reducing Indigenous Student Drop Out Rates

by Lesley Neale, Curtin University, The Conversation: https://theconversation.com/tutors-are-key-to-reducing-indigenous-student-drop-out-rates-86130

There has been an increase in Australian Indigenous students enrolling in university over the past 10 years. While this is good news, there has also been a high drop-out rate among first-year Indigenous students.

How universities address retention rates

Universities address student drop-out rates through retention policy initiatives such as peer-to-peer mentoring programs. Faculties or schools develop further retention strategies appropriate to their cohort. One successful support strategy for Indigenous students – already in place and, according to students and higher education bodies, effective – is the Indigenous Tertiary Assistance Scheme (ITAS).

ITAS has been around for 28 years, providing tutors for Indigenous students. I have worked as an ITAS tutor for 25 of those years, and have conducted interviews with many students who engage with the program. Working with the students and observing their progress suggests that ensuring all students have a tutor (especially in their first year) would lower the drop-out rate.

ITAS is funded directly from the Office of the Prime Minister and Cabinet as part of the Indigenous Advancement Strategy, introduced in 2014. The cost of extending ITAS would be absorbed by the Office of the Prime Minister and Cabinet, and outweighed by higher student retention and increased university fee income. A greater number of Indigenous students gaining degrees also has the advantage of lowering Indigenous unemployment figures, since statistics show that graduates are able to find work very quickly.

The first year is challenging

University can be a daunting place at first for anyone. Many Indigenous students say university culture is like a foreign culture, and those from rural and remote communities in particular have difficulty adjusting to it. Of the students surveyed, 44% cited financial reasons for dropping out. However, feedback suggests that stress, workloads and study/life balance – concerns also mentioned by the wider student cohort – need to be addressed. With appropriate support, the academic and personal challenges faced by students can become manageable. The current drop-out rate – twice that of other first years – disempowers both Indigenous communities and Australia as a whole.

Larger institutions such as Curtin University and the University of Western Australia, with cohorts of 400 to 600 Indigenous students, usually have 80 or more tutors available to work with students for two hours per academic unit per week. A larger number of tutors and more flexibility in how tutor hours are allotted would be beneficial.

Student experiences

Many students readily see the advantage of working with a tutor, but others attempt to go it alone. Students who come late to ITAS often regret not using the scheme earlier. One commented:
A good tutor can switch a student on to studying.
Students credit tutoring sessions with enhancing their ability to negotiate academia and successfully complete degrees. ITAS tutoring offers both academic assistance and mentoring. One student told me:
The feedback and support helps me feel more confident. It stops me from doubting myself.
Another explained:
I appreciated having someone to listen to my ideas, challenge me and support me.
Students may not have a clear understanding of exactly what is required of them. One student said he was exposed to skills he never knew he needed; another commented on needing time-management skills and help staying focused.

Students place importance on learning to “code switch”: the ability to move between everyday speech and writing and academic language. Indigenous students may speak Aboriginal English, Kriol, and an Aboriginal language. Often they speak all three. Effective code switching bridges the gap and provides the student with the tools to understand the requirements of an assignment and how to complete them successfully.

[Audio: an interview with Indigenous students from the Aboriginal Studies Students Program. Lesley Neale, author provided (no reuse).]

Working strategies

The learning environment provided by ITAS tutor sessions is quite different from that of a seminar or lecture, and not only because it is one-on-one. ITAS tutors don’t teach course content. They facilitate strategy development, help with assignment planning, and suggest ways of working. Sessions focus on a student’s areas of need and draw on their strengths, such as verbal competence, creativity or life experience.

Strategies such as “yarning” are effective when working with Indigenous students – and indeed, all students. Many tutors instinctively use these practices. The informality of yarning, or sharing information, establishes relationships and inspires collaboration. In tutor/student relationships, this leads to mutual respect and builds a learning space for discussing problems, sharing ideas and engaging with the intellectual rigours of a degree. One student said:
Spending time with my tutor provided time to question academic theories, practice critical thinking and work on my research skills.
Effective tutoring encourages students to challenge themselves. A Master’s student explained:
It’s not just about passing the units; I want to own the skill set. Own my work.
The yarning-style sessions offer a learning space that fosters intellectual growth, benefiting students beyond their years at university. The Indigenous Advancement Strategy states:
The positive impact that education has on the future success of individuals, families and communities is clear. Children who go to school have better life outcomes.
We need to ensure that Indigenous students who earn the right to be at university can take full advantage of the opportunity. Tutoring, if available to more students, especially first years, can play a vital role in reducing the drop-out rate. ITAS tutors offer academic tuition and mentoring and, according to students, are uniquely positioned to help them reach their full potential.

Lesley Neale, Adjunct Postdoctoral Research Fellow, Curtin University

This article was originally published on The Conversation. Read the original article.

Tuesday, October 31, 2017

Higher Education Cuts Will Be Felt in the Classroom, Not the Lab

by Michael Whelan, Southern Cross University, The Conversation:
https://theconversation.com/higher-education-cuts-will-be-felt-in-the-classroom-not-the-lab-86400

Teaching-focused academics are often considered to be “lesser” academics. Shutterstock

A recent Productivity Commission report exposed the bias of universities in favour of research over teaching.

The proposed higher education reform that would have seen A$380m cut from university funding was rejected by the Senate, but the word is that Education Minister Simon Birmingham has returned to the bunker to develop a new strategy. The most likely scenario is that vice-chancellors will need to cut costs, and we know where the axe will fall. Teaching-focused academics will be the hardest hit, and the cuts will be felt in the classroom rather than the research laboratory.

What is a teaching-focused academic?

The term teaching-focused academic has been used to include teaching-only academics, teaching-focused academics and teaching-intensive academics. The number of teaching-focused academics in Australia has increased from 755 in 2005 to 3696 in 2016. The number of teaching-focused academics is also increasing in the UK and Canada.

In Australia, the rise of the teaching-focused academic is credited to universities seeking to improve their Excellence in Research for Australia (ERA) rankings. Poorly performing teaching-research academics tend to be moved into teaching-focused roles to protect those rankings.

Teaching-focused academics are often considered to be “lesser” academics (Academicus minor). While evidence of research success is measured by volume (number of publications and research income), evidence of teaching scholarship is less quantifiable.

For example, 84% of academics consider teaching important, but only 29% believe teaching is rewarded in promotion. The data support their perception: less than 10% of teaching-only academics are above senior lecturer level, while more than 30% of teaching-research academics are.

Even when a teaching-focused academic is recognised for teaching excellence, it may not be acknowledged by their peers, or they may be subject to ridicule from other academics.

“Rank and sack” method shows bias against teaching-focused academics

Teaching-focused academics are more likely to be made redundant. Vice-chancellors tend to use the “rank and sack” method to protect researchers. Academics are ranked on the basis of research volume, and those individuals below a certain threshold are sacked.

A twist to the “rank and sack” method is to give the academic the option to become teaching-focused. An attitude of “anyone can teach” prevails. The departure of teaching-focused academics is felt in the classroom. These are the academics who keep up-to-date with technology, current trends in assessment practices and curriculum development.

University recruitment is focused more on research performance than teaching performance, to the detriment of teaching. In Australia, permanent research-only academics outnumber teaching-only academics four to one. Teaching-focused academics are further marginalised by casual employment: 82% are casual employees.

Over the last decade there has been a significant increase in casual staff, primarily to support teaching. When a tenured position becomes available, an academic with a track record in research is often appointed rather than a teaching-focused and, most likely, casual academic.

In Canada, one university’s decision to hire a research academic with a proven record rather than a popular teacher for a tenured position led to a petition from students. The popular teacher’s contract was extended.

Not renewing casual contracts is an easy fix for a manager who needs to cut costs. It isn’t so easy on the academic who relies on the income. Recently, an academic who had worked as a casual academic in Sydney for 15 years and was passed over for tenured positions committed suicide.

Cultural bias against teaching-focused academics is national

At a national level, there is further evidence that teaching is not valued at universities. The Australian Research Council (ARC), which distributes much of the category one research funding to universities, traces its origins back to 1946. In contrast, the Australian government’s teaching and learning body started as the Carrick Institute in 2006 and was later renamed the Office for Learning and Teaching (OLT). The OLT was shut down in June 2016. What would be the reaction to dissolving the ARC?

In 1992, Ruth Neumann, after interviewing heads of department and university executives, revealed a cultural bias against teaching-focused academics. Knowledge of the discipline was valued more than teaching skills. The following quotes are from her report:
academics involved in research were described as being: alert, enthusiastic, excited, keen, curious, fresh, and more alive.
the teaching of those academics not involved in research was described as: repetitive, dull, unstimulating, unexciting, dry, sterile and stagnant.
This cultural bias against teaching-focused academics may no longer be so explicit, but statistics on casualisation, poor promotion prospects and redundancy priorities, together with attitudes to teaching awards, indicate that very little has changed. The bias still exists.

Given this, it is easy to predict the outcome of any cuts to university funding. Teaching-focused academics will be sacrificed. Casual contracts for teaching-focused academics won’t be renewed. Tenured teaching-focused academics will be made redundant. The teaching load of academics who don’t have time to do research will be increased. But ERA rankings won’t be affected and the lights will still burn bright in university research laboratories around the country.

Michael Whelan, Lecturer in Environmental Science, Southern Cross University

This article was originally published on The Conversation. Read the original article.

Monday, October 23, 2017

Why Marking Essays by Algorithm Risks Rewarding the Writing of 'Bullshit'

by Kai Riemer, University of Sydney, The Conversation: https://theconversation.com/why-marking-essays-by-algorithm-risks-rewarding-the-writing-of-bullshit-85910

You may have heard that algorithms will take over the world. But how are they operating right now? We take a look in our series on Algorithms at Work.


Will marking algorithms really reward good writing? Terence/Shutterstock


Picture this: you have written an essay. You researched the topic and carefully constructed your argument. You submit your essay online and receive your grade within seconds. But how can anyone read, comprehend and judge your essay that quickly?

Well, the answer is no one can. Your essay was marked by a computer. Would you trust the mark you received? Would you approach your next essay with the same effort and care?

These are the questions that parents, teachers and unions are asking about automated essay scoring (AES). The Australian Curriculum, Assessment and Reporting Authority (ACARA) proposes to use this program to grade essays, such as the persuasive writing tasks in its NAPLAN standardised testing scheme for primary and secondary schools.

ACARA has defended its decision and suggested that computer-based marking can match or even surpass the consistency of human markers.

In my view, this misses the point. Computers are unable to genuinely read and understand what a text is about. A good argument has little worth when marks are awarded by a structural comparison with other texts and not by judging its ideas.

More importantly though, we risk encouraging the writing of text that follows “the script” but essentially says nothing of worth. In other words, the writing of “bullshit”.

How does algorithmic marking work?

It’s not entirely clear how AES functions, but let’s assume, in line with previous announcements, that it employs a form of machine-learning.

Here’s how that could work: a machine-learning algorithm “learns” from a pool of training data – in this case, it may be “trained using more than 1,000 NAPLAN writing tests scored by human markers”.

But it generally does not learn the criteria by which humans mark essays. Rather, machine learning consists of multiple layers of so-called “artificial neurons”. These are statistical values that are gradually adjusted during the training period to associate certain inputs (structural text patterns, vocabulary, key words, semantic structure, paragraphing and sentence length) with certain outputs (high grades or low grades).

When marking a new essay, the algorithm makes a statistical inference by comparing the text with learned patterns and eventually matches it with a grade. Yet the algorithm cannot explain why this inference was reached.
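ACARA has not published the details of its marker, but a toy version of the general approach described above can be sketched in a few lines of Python with scikit-learn. Everything here – the training essays, the grades, the choice of model – is illustrative only, not ACARA’s system:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for human-marked training essays (a real system would
    # be trained on 1,000+ scored scripts, as noted above).
    essays = [
        "Firstly, uniforms build community. Secondly, ... In conclusion, ...",
        "i dont like uniforms they are bad",
    ]
    grades = [9.0, 3.0]

    # The model never 'reads' the essays: it maps surface patterns (word
    # and phrase frequencies) to the grades humans assigned.
    scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
    scorer.fit(essays, grades)

    # A new essay is graded by statistical resemblance to past essays -
    # which is why structurally 'correct' nonsense can still score well.
    print(scorer.predict(["Firstly, homework builds discipline. In conclusion, ..."]))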

Importantly, high grades are awarded to papers that show the structural features of highly persuasive writing – papers that follow the “persuasion rulebook”, so to speak.

Rewarding bullshit

Are the claims by ACARA that algorithmic marking can match the consistency of human markers wrong? Probably not, but that’s not the issue.

It’s possible that machine-learning could reliably award higher grades for those papers that follow the structural script for persuasive writing. And it might indeed do this with higher consistency than human markers. Examples from other fields show this – for instance, in the classification of images in medical diagnosis. It will certainly be quicker and cheaper.

But it will not matter what a text is about: whether the argument is ethical, offensive or outright nonsensical, whether it conveys any coherent ideas or whether it speaks effectively to the intended audience.

The only thing that matters is that the text has the right structural patterns. In essence, algorithmic marking might reward the writing of “bullshit” – text written with little regard for the subject matter and solely to fulfil the algorithm’s criteria.

“Bullshit” here does not simply mean lying: analysts use the term to describe empty talk or meaningless jargon. Princeton philosopher Harry Frankfurt argues that talking bullshit may actually be worse than lying, because the lie at least reaffirms the truth:
It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it … For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says.
Unlike humans, algorithms are incapable of truly understanding when something is nonsense rather than genuine ideas and argumentation. They cannot tell whether a text has any worth or relationship to our world at all.

That’s why algorithmic marking, whether in NAPLAN or otherwise, risks rewarding the writing of bullshit.

Encouraging the wrong thing

Our politics, businesses and media are already flooded with empty arguments and jargon. Let’s not reward the skill of writing it.

Any application of algorithmic decision-making creates feedback loops. It influences future behaviour by rewarding and foregrounding some aspects of human practice and backgrounding others.

This is particularly the case when incentives are tied to the outcomes of algorithmic decision-making. In the case of NAPLAN, we know that the government rewards schools that score highly. As a result, there is already an entire industry geared towards “cracking the script” of NAPLAN in order to secure high marks.

Imagine what happens when students realise that genuine ideas and valid arguments are not rewarded by the algorithm.

Kai Riemer, Professor of Information Technology and Organisation, University of Sydney

This article was originally published on The Conversation. Read the original article.

Monday, October 16, 2017

A PhD Nightmare: How a ‘Safe’ Paper Turned Into a ‘Horror’ Paper


Recently, the last paper from my PhD was accepted for publication. The paper describes the impact of current and potential future land-use intensification on bird species richness in Transylvania, Romania. Although the paper is maybe not groundbreaking, I always thought it was still a relevant contribution to the scientific literature: it rests on a large field effort, it is statistically sound, and it is well written. A solid paper. Instead, getting the paper published has been a tough ride.
While we thought bats were difficult to publish (see our previous blog post on a rejection journey five years ago), we have now seen that birds can be even harder to get into journals. Ironically, this paper was considered the ‘safe paper’ of my PhD work. I was one of those lucky students who was part of a well-planned research project with great supervision. The bird work of my PhD was carefully planned and designed, was based on pilot studies, and was set in a region rich in (protected) bird species. Very soon, however, my ‘safe’ paper turned into my ‘horror’ paper, bringing high levels of frustration, a shattered confidence, and – in the end – lots of sarcasm and laughter.
Here is the story of how my ‘safe’ paper turned into my ‘horror’ paper.
Journal 1: Submitted Dec 2013, rejected with review Feb 2014: Lacking novelty and generality, and lacking clarity and focus of the analysis.
Journal 2: Submitted Feb 2014, rejected with review Mar 2014: Too broad discussion and lacking strong conclusions/management recommendations.
After these first two rejections, we made major changes to the manuscript. We narrowed down the manuscript considerably by deleting a part on species traits, and worked on the clarity of our methods section.
Journal 3: Submitted May 2014, rejected without review: Not general enough in concept, scope and approach.
Journal 4: Submitted May 2014, rejected with review Sep 2014: Lacking novelty.
Journal 5: Submitted Oct 2014, rejected with review Dec 2014: Lacking novelty, and lacking clarity in the methodology and results. As one reviewer put it: having a more complicated and complex design than other studies should not stand for novelty in scientific research.
By the time the paper had been rejected five times, I was pretty desperate and frustrated to hear over and over that the study lacked novelty. I figured that we couldn’t change much about the novelty of our study’s outcome. However, another frequent critique concerned the clarity of the methods and results, something I thought we could improve. Therefore, to give the paper a new and fresh boost, we enlisted the help of a new co-author. We re-analysed the entire paper focusing solely on species richness (taking out a part on bird communities), rewrote the entire paper for clarity and to put it into a broader context, and even put in some pretty pictures to illustrate traditional farming landscapes. With our paper in its new jacket, I was convinced we would be luckier in the review process.
Journal 6: Submitted Jun 2015, rejected with review Aug 2015: Methodology limited the study’s conclusion and its capacity to go beyond a regional example. For example, it was critiqued that the model averaging approach used poses limitations and regression coefficients should be used instead.
Journal 7: Submitted Aug 2015, rejected with review Sep 2015: Flawed study design which was deemed uncorrectable without significant reanalysis. Although reviewer 1 had significant problems with our study design, reviewer 2 seemed to be less unhappy: The study is well introduced (I particularly liked the introduction of traditional farming landscapes), the study design is appropriate, the analyses generally robust (although please see comment below), and the results clear, and the discussion well considered.
Journal 8: Submitted Nov 2015, rejected with review Dec 2015: Methodology – given our objectives and sampling design we used the wrong analytical unit.
Journal 9: Submitted Jan 2016, rejected with review Feb 2016: Lack of novelty, trivial findings and not taking into account the rarity of species (something we had excluded from the manuscript due to other reviewer comments).
Journal 10: Submitted Feb 2016, rejected with review June 2016: Goal of the work not addressed.
Journal 11: Submitted Sep 2016, Minor revisions Jan 2017, Submitted revised manuscript Jul 2017 (after maternity leave), Accepted Jul 2017. Hurrah, the reviewers liked the paper a lot!!
Ten rejections on this paper, mostly after review, means that approximately 25 (!) reviewers were involved in getting it published. Importantly, probably half of those reviewers could have been satisfied with major revisions. As in the example under journal 7, usually one of the reviewers did not dislike our paper that much, but I guess one more negative review is enough for a rejection.
Even more interesting, we published two similar papers on butterflies and plants from the same region, based on the same study design and using similar analyses. While this paper on birds drew continuous critique that our methodology was unclear, flawed or limited, the papers on plants and butterflies received positive, constructive reviews without many complaints about novelty and/or study design. I am still not sure why this paper had such a hard time – is it just birds, or something else? – but I am happy it is finally out there! Enjoy the read, and you can always contact me for further clarification on its methods or novelty. :)

The IQ Test Wars: Why Screening for Intelligence is Still So Controversial

by Daphne Martschenko, The Conversation: https://theconversation.com/the-iq-test-wars-why-screening-for-intelligence-is-still-so-controversial-81428


For over a century, IQ tests have been used to measure intelligence. But can it really be measured? via shutterstock.com
Daphne Martschenko, University of Cambridge
John, 12-years-old, is three times as old as his brother. How old will John be when he is twice as old as his brother?
Two families go bowling. While they are bowling, they order a pizza for £12, six sodas for £1.25 each, and two large buckets of popcorn for £10.86 each. If they are going to split the bill between the families, how much does each family owe?
4, 9, 16, 25, 36, ?, 64. What number is missing from the sequence?
These are questions from online Intelligence Quotient or IQ tests. Tests that purport to measure your intelligence can be verbal, meaning written, or non-verbal, focusing on abstract reasoning independent of reading and writing skills. First created more than a century ago, the tests are still widely used today to measure an individual’s mental agility and ability.

Education systems use IQ tests to help identify children for special education and gifted education programmes and to offer extra support. Researchers across the social and hard sciences study IQ test results, looking at everything from their relation to genetics and socio-economic status to academic achievement and race.

Online IQ “quizzes” purport to be able to tell you whether or not “you have what it takes to be a member of the world’s most prestigious high IQ society”.

If you want to boast about your high IQ, you should have been able to work out the answers to the questions. When John is 16 he’ll be twice as old as his brother. The two families who went bowling each owe £20.61. And 49 is the missing number in the sequence.
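(If you would rather not take the arithmetic on trust, a few lines of Python confirm all three answers – note that the popcorn price is per bucket, which is what makes the split £20.61.)

    # Checking the three puzzle answers given above.

    # 1. John (12) is three times his brother's age, so the brother is 4.
    #    John is twice as old when 12 + x == 2 * (4 + x), i.e. x == 4.
    brother = 12 // 3
    x = next(n for n in range(100) if 12 + n == 2 * (brother + n))
    print(12 + x)  # 16

    # 2. Pizza 12.00, six sodas at 1.25 each, two buckets of popcorn at
    #    10.86 each, split between two families.
    bill = 12 + 6 * 1.25 + 2 * 10.86
    print(round(bill / 2, 2))  # 20.61

    # 3. The sequence is the perfect squares 2^2 .. 8^2; the gap is 7^2.
    print(7 ** 2)  # 49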

Despite the hype, the relevance, usefulness, and legitimacy of the IQ test is still hotly debated among educators, social scientists, and hard scientists. To understand why, it’s important to understand the history underpinning the birth, development, and expansion of the IQ test – a history that includes the use of IQ tests to further marginalise ethnic minorities and poor communities.

Testing times

In the early 1900s, dozens of intelligence tests were developed in Europe and America claiming to offer unbiased ways to measure a person’s cognitive ability. The first of these tests was developed by French psychologist Alfred Binet, who was commissioned by the French government to identify students who would face the most difficulty in school. The resulting 1905 Binet-Simon Scale became the basis for modern IQ testing. Ironically, Binet actually thought that IQ tests were inadequate measures for intelligence, pointing to the test’s inability to properly measure creativity or emotional intelligence.

At its conception, the IQ test provided a relatively quick and simple way to identify and sort individuals based on intelligence – which was and still is highly valued by society. In the US and elsewhere, institutions such as the military and police used IQ tests to screen potential applicants. They also implemented admission requirements based on the results.

The US Army Alpha and Beta Tests screened approximately 1.75m draftees in World War I in an attempt to evaluate the intellectual and emotional temperament of soldiers. Results were used to determine how capable a soldier was of serving in the armed forces and to identify which job classification or leadership position he was most suitable for. Starting in the early 1900s, the US education system also began using IQ tests to identify “gifted and talented” students, as well as those with special needs who required additional educational interventions and different academic environments.

Ironically, some districts in the US have recently employed a maximum IQ score for admission into the police force. The fear was that those who scored too highly would eventually find the work boring and leave – after significant time and resources had been put towards their training.

Alongside the widespread use of IQ tests in the 20th century was the argument that the level of a person’s intelligence was influenced by their biology. Ethnocentrics and eugenicists, who viewed intelligence and other social behaviours as being determined by biology and race, latched onto IQ tests. They held up the apparent gaps these tests illuminated between ethnic minorities and whites or between low- and high-income groups.

Some maintained that these test results provided further evidence that socioeconomic and racial groups were genetically different from each other and that systemic inequalities were partly a byproduct of evolutionary processes.

Going to extremes

The US Army Alpha and Beta test results garnered widespread publicity and were analysed by Carl Brigham, a Princeton University psychologist and early founder of psychometrics, in his 1923 book A Study of American Intelligence. Brigham applied meticulous statistical analyses to demonstrate that American intelligence was declining, claiming that increased immigration and racial integration were to blame. To address the issue, he called for social policies to restrict immigration and prohibit racial mixing.

A few years before, American psychologist and education researcher Lewis Terman had drawn connections between intellectual ability and race. In 1916, he wrote:
High-grade or border-line deficiency … is very, very common among Spanish-Indian and Mexican families of the Southwest and also among Negroes. Their dullness seems to be racial, or at least inherent in the family stocks from which they come … Children of this group should be segregated into separate classes … They cannot master abstractions but they can often be made into efficient workers … from a eugenic point of view they constitute a grave problem because of their unusually prolific breeding.
There has been considerable work from both hard and social scientists refuting arguments such as Brigham’s and Terman’s that racial differences in IQ scores are influenced by biology.

Critiques of such “hereditarian” hypotheses – arguments that genetics can powerfully explain human character traits and even human social and political problems – cite a lack of evidence and weak statistical analyses. This critique continues today, with many researchers resistant to and alarmed by research that is still being conducted on race and IQ.

But in their darkest moments, IQ tests became a powerful way to exclude and control marginalised communities using empirical and scientific language. Supporters of eugenic ideologies in the 1900s used IQ tests to identify “idiots”, “imbeciles”, and the “feebleminded”. These were people, eugenicists argued, who threatened to dilute the White Anglo-Saxon genetic stock of America.

A plaque in Virginia in memory of Carrie Buck, the first person to be sterilised under eugenics laws in the state. Jukie Bot/flickr.com, CC BY-NC

As a result of such eugenic arguments, many American citizens were later sterilised. In 1927, an infamous ruling by the US Supreme Court legalised forced sterilisation of citizens with developmental disabilities and the “feebleminded,” who were frequently identified by their low IQ scores. The ruling, known as Buck v Bell, resulted in over 65,000 coerced sterilisations of individuals thought to have low IQs. Those in the US who were forcibly sterilised in the aftermath of Buck v Bell were disproportionately poor or of colour.

Compulsory sterilisation in the US on the basis of IQ, criminality, or sexual deviance continued formally until the mid 1970s when organisations like the Southern Poverty Law Center began filing lawsuits on behalf of people who had been sterilised. In 2015, the US Senate voted to compensate living victims of government-sponsored sterilisation programmes.

IQ tests today

Debate over what it means to be “intelligent” and whether or not the IQ test is a robust tool of measurement continues to elicit strong and often opposing reactions today. Some researchers say that intelligence is a concept specific to a particular culture. They maintain that it appears differently depending on the context – in the same way that many cultural behaviours would. For example, burping may be seen as an indicator of enjoyment of a meal or a sign of praise for the host in some cultures and impolite in others.

What may be considered intelligent in one environment, therefore, might not in others. For example, knowledge about medicinal herbs is seen as a form of intelligence in certain communities within Africa, but does not correlate with high performance on traditional Western academic intelligence tests.

According to some researchers, the “cultural specificity” of intelligence makes IQ tests biased towards the environments in which they were developed – namely white, Western society. This makes them potentially problematic in culturally diverse settings. The application of the same test among different communities would fail to recognise the different cultural values that shape what each community values as intelligent behaviour.

Going even further, given the IQ test’s history of being used to further questionable and sometimes racially-motivated beliefs about what different groups of people are capable of, some researchers say such tests cannot objectively and equally measure an individual’s intelligence at all.

Used for good

At the same time, there are ongoing efforts to demonstrate how the IQ test can be used to help the very communities that have been most harmed by it in the past. In 2002, the US Supreme Court ruled that executing criminally convicted individuals with intellectual disabilities – who are often assessed using IQ tests – was unconstitutional. IQ tests have therefore actually prevented individuals from facing “cruel and unusual punishment” in US courts of law.

In education, IQ tests may be a more objective way to identify children who could benefit from special education services. This includes programmes known as “gifted education” for students who have been identified as exceptionally or highly cognitively able. Ethnic minority children and those whose parents have a low income are under-represented in gifted education.

There is ongoing debate about the use of IQ tests in schools. via shutterstock.com

The way children are chosen for these programmes means that Black and Hispanic students are often overlooked. Some US school districts employ admissions procedures for gifted education programmes that rely on teacher observations and referrals, or require a family to sign their child up for an IQ test. But research suggests that teacher perceptions and expectations of a student, which can be preconceived, have an impact on a child’s IQ scores, academic achievement, and attitudes and behaviour. This means that teachers’ perceptions can also affect the likelihood of a child being referred for gifted or special education.

The universal screening of students for gifted education using IQ tests could help to identify children who otherwise would have gone unnoticed by parents and teachers. Research has found that school districts which have implemented screening measures for all children using IQ tests have been able to identify more children from historically under-represented groups to go into gifted education.

IQ tests could also help identify structural inequalities that have affected a child’s development.

These could include the impacts of environmental exposure to harmful substances such as lead and arsenic, or the effects of malnutrition on brain health. All of these have been shown to have a negative impact on an individual’s mental ability and to disproportionately affect low-income and ethnic minority communities.

Identifying these issues could then help those in charge of education and social policy to seek solutions. Specific interventions could be designed to help children who have been affected by these structural inequalities or exposed to harmful substances. In the long run, the effectiveness of these interventions could be monitored by comparing IQ tests administered to the same children before and after an intervention.

Some researchers have tried doing this. One US study in 1995 used IQ tests to look at the effectiveness of a particular type of training for managing Attention Deficit/Hyperactivity Disorder (ADHD), called neurofeedback training. This is a therapeutic process aimed at trying to help a person to self-regulate their brain function. Most commonly used with those who have some sort of identified brain imbalance, it has also been used to treat drug addiction, depression and ADHD. The researchers used IQ tests to find out whether the training was effective in improving the concentration and executive functioning of children with ADHD – and found that it was.
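The statistics behind such a before-and-after comparison are straightforward. Here is a minimal sketch in Python – the scores are invented for illustration and are not the 1995 study’s data – showing the kind of paired comparison involved:

    from scipy import stats

    # Hypothetical pre- and post-intervention IQ scores for the same
    # children (made-up numbers, not data from the 1995 study).
    pre = [92, 88, 95, 101, 85, 90, 97, 89]
    post = [99, 93, 96, 108, 91, 94, 103, 95]

    # A paired test is natural here: each child serves as their own control.
    t_stat, p_value = stats.ttest_rel(post, pre)
    mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

    print(f"mean gain: {mean_gain:.1f} IQ points, "
          f"t = {t_stat:.2f}, p = {p_value:.4f}")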

Since its invention, the IQ test has generated strong arguments in support of and against its use. Both sides are focused on the communities that have been negatively impacted in the past by the use of intelligence tests for eugenic purposes.

The use of IQ tests in a range of settings, and the continued disagreement over their validity and even morality, highlights not only the immense value society places on intelligence – but also our desire to understand and measure it.

Daphne Martschenko, PhD Candidate, University of Cambridge

This article was originally published on The Conversation. Read the original article.