Saturday, February 28, 2015

How Scrivener is Helping Me Rebuild My Dissertation

by Deep in the Burbs: http://www.deepintheburbs.com/how-scrivener-is-helping-me-rebuild-my-dissertation/
 
I made a bold step today and dismantled my dissertation. My advisor said that the current draft is like a jigsaw puzzle. I have most of the pieces, but they need to be rearranged.

I am a visual thinker, so the best way I can think of to dismantle and reshuffle a document is to use Scrivener.

That means I had to strip the beautifully formatted Word document in order to bring it into Scrivener (keep in mind, this is a fully formatted document with front matter, footnotes linked to Endnote, bibliography, etc. This process hurt, I’m not going to lie). Scrivener will help me rebuild the document, and it will be better because of it.

Here is the step-by-step process I took:
  • created a new Scrivener document.
  • removed all the annotations and comments from the Word document.
  • saved the Word document as an .rtf (because Scrivener works with .rtf documents).
  • imported the .rtf document into the “Research” folder in the Scrivener Binder. The beauty of this is that all the footnotes are preserved. However, the Endnote links are broken. I will eventually have to go back and rebuild those. It will be tedious, but not horrible.
  • highlighted each header of the document, right-clicked and chose the “Split with selected text as title” option to dissect the document into Scrivener cards, then arranged the cards into the outline hierarchy of the original Word document.
  • highlighted the entire outline in the Binder and changed the icon to a brown book.
  • duplicated the entire outline and changed the icon to a blue book. This allowed me to have a static reference of the original document (marked as a brown book) and movable pieces to place into the new draft (marked as a blue book).
  • created a new outline in the Draft Binder, based on the proper chapter structure.
  • moved the blue book pieces of the old draft into the new chapter structure.
Now I can see all the pieces from the previous draft in relation to the gaping holes in the new draft. When it is all done, then I can hit “Compile,” save it as a Word Document, and use the Luther Template to format it to specs again.
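
Incidentally, for anyone who prefers a scriptable equivalent of that “split with selected text as title” step, here is a minimal sketch in Python. It assumes a plain-text export with Markdown-style “#” headings (an assumption, not part of the workflow above); Scrivener does this natively through its GUI, so this is purely illustrative.

import re
from pathlib import Path

def split_by_headings(source: Path, out_dir: Path) -> None:
    """Split a plain-text export into one file per heading, roughly mimicking
    Scrivener's "Split with Selected Text as Title" step."""
    out_dir.mkdir(parents=True, exist_ok=True)
    text = source.read_text(encoding="utf-8")
    # Split just before each Markdown-style heading, keeping the heading with its section.
    sections = [s for s in re.split(r"(?m)^(?=#{1,6}\s)", text) if s.strip()]
    for i, section in enumerate(sections):
        title = section.splitlines()[0].lstrip("#").strip() or f"section-{i}"
        safe_name = re.sub(r"[^\w-]+", "-", title).strip("-").lower()
        (out_dir / f"{i:02d}-{safe_name}.txt").write_text(section, encoding="utf-8")

if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    split_by_headings(Path("dissertation.txt"), Path("cards"))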


A Learning Typology: 7 Ways We Come To Understand

by Stewart Hase, Heutagogy of Community Practice, Teach Thought: http://www.teachthought.com/learning/a-learning-typology-7-ways-we-come-to-understand/

This typology is an attempt to redefine how we think of learning in the 21st century context.

Current definitions of learning focus on performance rather than holistic growth, and on what the learner can do after a learning experience. Gagne is perhaps the most notable exception.

General dictionary definitions of learning refer to learning as the acquisition of knowledge. (Also, see TeachThought’s 21st Century Learning Dictionary). The prevailing psychological definition is that learning is a change in behavior resulting from experience. Both these definitions seem inadequate given recent advances in neuroscience that show us how complex a process learning is.

We are now much more able to examine directly how people learn, rather than indirectly by studying what techniques work, which tends to be anecdotal and qualitative. It now seems to make more sense to design learning experiences around how learning takes place and how it blends with learner interest, rather than around producing a particular outcome.

The typology described below concerns what is happening in the mind of the learner during the experience of learning. Using this as a base, we are then able to move on to outcomes and to the educational or learning experience itself.

Each type of learning implies that different learning experiences can be designed either to help people learn or to suit people already operating at that level.

The worthiness test is not meant to be applied: this is a typology, not a taxonomy. For example, conditioned behaviors, habits and competencies are critical to survival and to the efficient use of resources. They don’t need to be seen as less vital or subservient to, say, adaptive learning, although they could perhaps be said to be more primitive.

A Learning Typology: 7 Ways We Come To Understand

1. Autopoietic and Adaptive

This involves what one could call deep learning. Complex connections are made between previous learning in the face of the need to adapt. Bifurcation enables shifts in perspective, the confident ability to attempt something new, to experiment.

Autopoiesis involves self-organization and adaptive behavior in highly complex and perhaps chaotic environments. Learning is applied in novel ways; reflexivity, double-loop learning and triple-loop learning are used as normal practice to evaluate behavior and outcomes, which then facilitate further change in a continuous, adaptive feedback loop. Knowledge becomes wisdom.

All learning involves pathways being established in the brain that are then retrieved in the form of memory. In adaptive learning, however, we see connections being made between different pathways that create new insights, new ways of seeing the world, new hypotheses to be tested. This is the world of creativity and innovation, and, ultimately, survival in the face of the need to adapt.

You can imagine this sort of learning occurring in the face of very complex problems or when survival is threatened. We are forced to look at the world in a different way, to challenge existing dogmas that clearly are not working.

Thus, motivation is high either by design or by rising stress. In the case of the former, one thinks of Edison’s laboratory in Menlo Park or perhaps Google and Apple, hot houses of innovation and creativity.

The latter may involve a more spontaneous rethinking of a theory, a new way of interpreting our experience (data or events), a new insight into a phenomenon, perhaps a reinventing of self.


2. Shifts in Cognitive Schema

Cognitive schema are our values, attitudes and beliefs, which are translated into thoughts and actions. In normal day-to-day life they are relatively resistant to change. They are learned early in life and drive much of our behavior. With strong links to the emotional parts of our brains, cognitive schema will often override even very convincing evidence to the contrary.

So, a shift in our cognitive schema is a very high-order learning experience. It often takes a very emotionally charged event, a powerful experience, to change them. This was something that constructivism and one of its sequelae, experiential learning, understood very well, as does much of psychotherapy.

As in adaptive learning, a new complex web of pathways is developed, and the old are broken down. So powerful is this shift from old to new that we may later never even recall having held a particular belief or attitude.

With a shift in cognitive schema comes a new set of behaviors. I may, for example, be involved in a cleverly designed experiential learning activity in a workshop that causes me to become aware that I have some very controlling behaviours as a leader. The insight is so powerful that I decide to combat this strong personality trait, delegate more and to trust others rather than micro-manage them.

3. Capability Development

Capable people (Cairns, Stephenson) are able to apply learning in novel situations as well as the familiar. They also have a high level of self-efficacy, collaborate well with others and have the capacity to learn.  

Here context is the key to new learning. Changing context provides an opportunity to experiment with our competencies and perhaps find novel and authentic ways of problem finding and solving.

To develop capability I am required to apply my competencies in a range of novel situations, to stay calm in the face of complexity, to think analytically about how to use my skills, to learn new skills, and to seek out a mentor or learn vicariously.

I am aware of the importance of relational learning, and of the potential of the learning commons. As I become more adept my self-efficacy increases and becomes more generalized.

4. Tacit Learning

High-level competencies are internalized so effectively that highly skilled tasks can be undertaken without observable/overt thought. Thoughts may be actualized through external questioning. Tacit learning is mostly seen in expert practitioners and occurs quite unconsciously.

I am placed in situations where my competencies are refined in the face of increasingly complex tasks.

5. Competence

Competencies consist of knowledge and skills. We acquire these through direct experience or vicariously, informally and in formal education. Most formal education is concerned with competence attainment and its reproduction.

In today’s networked world, obtaining competencies is easier than ever before. We acquire competence through formal education experiences or, more likely, through informal processes, when and if we require them.

6. Operant Conditioning

Operant conditioning consists of unconscious, conditioned responses to stimuli in the environment. This is a very common form of learning in formal and informal settings and is responsible for us learning many physical and social life skills essential for survival.

We perform a behavior and we are rewarded for it by some form of recognition, reward or positive outcome. The reward conditions the response and we are more likely to reproduce the behaviour in the future. More complex conditioned behaviors become habits and are unconsciously repeated.

This sort of learning (conditioning) can also be vicarious as we watch others experience positive outcomes when they do something.

7. Signal Learning

This is the simplest form of learning, also known as classical conditioning. Again, it is a very common form of learning and is unconscious.

When I was a child my mother once gave me honey on bread when I was very sick with Scarlet Fever. It made me feel nauseous. Since then I have had a mild aversion to honey and never eat it. This was an unintended response (nausea) to the stimulus (honey).

Advertisers use classical conditioning techniques to make us buy things. An attractive person driving a particular car or using an appliance is used as a stimulus to elicit a response. A close friend who you always have a great time with when you go out wears a particular perfume. The smell of the perfume on other people makes you feel really good, without you being aware of it.

A Learning Typology: 7 Ways We Come To Understand; image attribution stewarthase and nasagoddard

Tuesday, February 24, 2015

Where Does A Freed Mind Go?

by Terry Heick, Teach Thought: http://www.teachthought.com/culture/where-does-a-freed-mind-go/

I. Education is a system, but teaching and learning are not systems. This presents a challenge.

II. First, education. As a system, it is made up of parts, and these parts can be conceived in any number of ways. That is to say, they are subjective because we, as individuals, are subjective.

III. We only become objective under strained scrutiny from others, and even then that objectivity is temporary. Once we move from an object of study to something familiar - from a being to a person - the objectivity is lost (to the biologist, the species becomes a primate becomes a monkey becomes a friend).

IV. It is through this loss that human connectivity is gained; it is through human connectivity that we then discover our own interdependence. That is, by how we connect with the people and spaces and ideas around us, we begin to make sense of ourselves. One changes the other.

V. Education, as a system, doesn’t have a way of responding to or planning for this nuanced and entirely human process. This leaves a key actuator of education - teachers - to “handle” that part. And when this doesn’t happen, the marrow of learning is gone. It is a shell (this is when academics move from a body of knowledge worthy of study to a mechanical and thoughtless process that belies its own wisdom).

VI. Systems don’t plan for people. The language of systems is binary and mechanical; the language of people is musical and irrational. Education can’t communicate with teachers; teachers can’t communicate with curriculum; curriculum can’t speak to communities; communities can’t speak to families or students. Of these parts, only the teachers and families and students are real - capable of speaking and being spoken to. Of responding and creating and resisting and laughing and running amok.

VII. How we see ourselves changes how we view “reality,” and what we believe about “reality” affects how we see ourselves. We construct and co-construct a reality that provides an always-on feedback loop where we constantly calibrate our sense of self and, based on what we “see,” either adapt that view of reality or iterate our sense of self (consider how you think of yourself as an adult versus how you viewed yourself as a teenager; then think not only of that difference, but of the events that caused that change - what we call “maturity” or “growing up”).

VIII. This is a ceaseless process that education, by design, seeks to interrupt because it never bothers to learn the language of the individual student - this child with this story sitting in this chair. Teachers are the great translators of learning - mediators that speak in binary code for the system, and in human tongue for the children. This both emphasizes and overburdens teachers.

IX. Secondarily, this reduces knowledge and wisdom to matters of performance, which are further reduced to letter grades and certificates. This sequence represents the complete dehumanization of the learning process, done not out of malice, but by an entirely predictable pattern: systems-level thinking rather than personal-level affection. We continuously seek to make that which is subjective, objective.

X. In response, education technology has recently been turned to in hopes of easing this burden, but without clear and human and careful communication between teachers and curriculum and communities and families and students, “edtech” merely energizes the system itself, illuminating all of its parts in jagged and purple and thrumming arcs. And here, the best we can hope for is disruption.

XI. If we can accept knowledge, wisdom, literacy, and critical thinking as goals of education (if that continues to be our choice of, for lack of a better term, educating), we might study the characteristics of someone who excels in these four areas to see what it looks like. We might think backwards not from standards, but from people in their native and chosen places.

XII. Put another way, we could do worse than to begin with a question: If knowledge emancipates the mind, once freed, where does it go?

Where Does A Freed Mind Go?; image attribution flickr user thericks

Monday, February 23, 2015

BOOK REVIEW: "Feminism, Gender, and Universities: Politics, Passion and Pedagogies" by Miriam E. David

by Katherine Williams, Impact of Social Sciences: http://blogs.lse.ac.uk/impactofsocialsciences/2015/02/22/book-review-feminism-gender-and-universities-politics-passion-and-pedagogies-by-miriam-e-david/

Feminism, Gender, and Universities celebrates the way in which feminism has forever changed the terrain of higher education whilst examining the impact that the movement has had on the lives of women engaged in teaching others, writes Katherine Williams. 

This review originally appeared on LSE Review of Books.

Feminism, Gender, and Universities: Politics, Passion and Pedagogies. Miriam E. David. Ashgate. 2014.

With Feminism, Gender, and Universities, Miriam E. David, of the Institute of Education at the University of London, aims to demonstrate the positive impact that feminism has had on higher education.

This is deftly illustrated through the testimonies of women who engaged with feminist theory throughout their own university experiences. The book is the product of the political project of feminism and of how feminism became a worldwide social movement in the twentieth century - a movement now culturally embedded in academia, and in society at large.

The book comprises eight principal chapters which span the emergence of academic feminism chronologically. Women are grouped into ‘cohorts’ according to their decade of birth.

The study itself, according to the author, tries to be as inclusive as possible in terms of the global range of women invited to participate in the surveys that make up the vast wealth of the primary material used to collect the written testimonies of female academics.

However, the study includes women mainly from the ‘global north’, i.e., the USA, UK, Canada, France, Germany, India, Ireland, Israel, New Zealand, Australia, and South Africa. Despite the lack of insight from women in developing countries, David hopes that the study will have implications for and impact upon the ‘global south’ (xii), contributing to the fight against gender-based violence in all its forms.

David does not include any oral or written testimony by feminist men in academia in the text, but then, the project does not claim to speak for the feminist movement as a whole; it is but a contribution to the ever-changing landscape that is feminism, and feminist academia.

David’s involvement in grassroots projects designed to celebrate International Women’s Day, amongst many others, illustrates that her desire to capture women’s voices is not limited to the relative privilege of the ivory tower.

The basis of the collective biography, as described in ‘Second Wave Feminism Break on the Shore of Academe’ (pp. 95-123), is that participants in the project replied to questions designed to help them pinpoint when they became feminists.

In the case of the aforementioned chapter, this would also include details of how the emergence of second wave feminism influenced their learning, and life (p. 95).

The responses in this case are, of course, diverse. Some women ‘became feminists’ after leaving university; some felt early ripples of feminism before, and during, their university experiences; some women, simply put, did not want to end up stifled and unfulfilled, like their mothers: ‘I was born a feminist! I did NOT want to ‘end up like my mother’ - frustrated by being bright and unable to complete her schooling …’ (p. 110).
6950609933_7a992a0b5d_z 
Credit: Carlos Lowry CC BY-NC 2.0
Wider political projects became extremely influential to the women featured in the chapter; rising feminist ‘stars’ such as Germaine Greer, Kate Millett, and Sheila Rowbotham, and the emerging Women’s Liberation Movement (WLM), became pivotal to individuals’ feminist awakenings.

Participants’ personal and educational journeys were also marked by their readings of feminist, or women’s literature: ‘I became a feminist when I was about twelve when I read novels by a Punjabi writer called Nanak Singh. I was also influenced by Amrita Pritam, a Punjabi poet and novelist, and Waris Shah … they all critiqued women’s positions in Punjabi society’ (p. 97).

Ultimately, the women of this particular group didn’t come to feminism solely through their engagement with popular feminist texts, or their involvement with the WLM; it was the political mood of the time and each woman’s own personal circumstances that led them to ‘create feminist knowledge with a passion’ (p. 111).

It becomes obvious throughout the text that feminist values transcend generations. David discovered that despite the differences in ages, location, educational experiences, or socio-economic backgrounds, the women involved in the project all shared a strong commitment to social and gender justice (p. 173).

In ‘Academic Feminism Today: Towards a Feminized Future in Global Academe’ (pp. 173-193), David discusses how the feminist mobilisation instigated in academia over the last fifty years can continue into the future. Whilst women have made many gains in academia, David contends that it is still a largely male-dominated environment; despite claims of gender equality within the academy, she argues that ‘gender equality in terms of numbers is indeed misogyny masquerading as metric’ (p. 174).

Feminism in the academy, according to the author, has been ‘seduced’ by neo-liberal corporate models that are increasing competition between universities, making for tense environments for the teaching of feminist, or women’s studies.

‘Marilyn’ states ‘the neo-liberal and increasingly illiberal university is fundamentally anti-pathetical to feminism, and I think women’s studies as an academic discipline is in an increasingly uncomfortable tension between the ideals of feminism … and the demands of a neo-liberal institution’ (p. 175).

Whilst women scholars are arguably better represented in academia today than in the past, women’s responsibility for the care of their children still has a detrimental impact on career chances and promotion opportunities; academia seems geared towards androcentrism.

The cure for this ‘feminist melancholia’ (p. 179) is for the next generation of feminist academics to utilise the tools at their disposal to promote the feminist cause, namely, through social media outlets such as Twitter or Facebook, and good old fashioned awareness-raising.

Education trailblazers such as Malala Yousafzai, says David, perfectly illustrate the importance of narrative-building in education settings; the obvious goal being not only upward social mobility for women, but also addressing gender-based violence.


The text is not only a celebration of women academics and their feminist ‘coming-of-age’, but offers the reader a critical dissection of what the author considers the corporatisation of the university landscape. The oral and written testimonies collected by Miriam E. David nod to the feminist tradition of listening to women’s voices, and giving their experiences a forum in which they can be heard.

Katherine Williams graduated from Swansea University in 2011 with a BA in German and Politics, and is currently studying for an MA in International Security and Development. Her academic interests include the de/construction of gender in IR, conflict-driven sexual violence, and memory and reconciliation politics. You can follow her on Twitter @polygluttony. Read more reviews by Katherine.

What Level of English Competence is Enough for Doctoral Students?

by Cally Guerin, Doctoral Writing: https://doctoralwriting.wordpress.com/2015/02/23/what-level-of-english-competence-is-enough-for-doctoral-students/

In recent weeks I’ve been involved in a number of different situations focused on assessing the English language skills of international students, which has made me think yet again about what is most important in this regard for those entering the world of doctoral writing.

An article in Times Higher Education served as a timely reminder that this continues to be a vexed issue at all levels of university study in this era of internationalisation. It is also useful to remember just how complex it is to accurately assess language levels, especially under high-stakes exam conditions.

In Australia, the International English Language Testing System (IELTS) is commonly used to determine English language competency. All four language skills (reading, writing, speaking and listening) are tested in four separate parts of the exam.

Another widely accepted language test, the Test of English as a Foreign Language (TOEFL), is conducted online and includes tasks that integrate writing, reading and listening.

At my current university, international students are required to have an entry level IELTS score of 6.5 or higher. This is equivalent to the 79-93 range in TOEFL. But what do these numbers mean? IELTS explains:

Band 7: Good user
Has operational command of the language, though with occasional inaccuracies, inappropriacies and misunderstandings in some situations. Generally handles complex language well and understands detailed reasoning.

Band 6: Competent user
Has generally effective command of the language despite some inaccuracies, inappropriacies and misunderstandings. Can use and understand fairly complex language, particularly in familiar situations.

According to this measurement, our students are usually somewhere in between (for more information, go to: http://www.ielts.org/institutions/test_format_and_results/ielts_band_scores.aspx).

This may sound adequate, but what 6.5 or 79-93 looks like in real life may seem like quite a lot of inaccuracies to some supervisors faced with drafts of their students’ writing. Maybe most sentences have small errors such as absent or misused articles (a, the), uncountable nouns used as plurals (researches or evidences), lack of agreement between subject and verb (participants has reported), or wrong word forms (have observe).

On the whole, I’m not too fussed about what I would regard as ‘surface errors’ like these. So long as the sentence structure is more or less in place, and the reader can understand what the student is getting at, I am more than willing to work with that. But I am aware that as a former English language teacher I bring particular skills to this task that others may not necessarily have.

Working as an editor for academics whose first language was not English also taught me useful lessons about writing and English competence. In that position, my employer’s policy stated that editors were not to intervene with ‘corrections’ unless there was actually a mistake - it was not regarded as appropriate to impose personal or stylistic preferences on others’ writing.

There is an important distinction between actual errors and personal preferences that is relevant to doctoral students too. When we urge doctoral writers to ‘find your own voice’, they may choose to include some stylistic quirks that are not strictly conventional in academic writing, yet communicate valuable aspects of their own perspective on the topic. Again, it’s necessary to consider whether or not it is ‘wrong’, or whether it might be quite acceptable to many academic readers.

I think it’s also very important to recognise what an author is achieving in their writing, rather than focusing on what is not grammatically accurate. For example, doctoral writers should be commended for successfully ensuring that all the relevant information is present and properly referenced; that the overall argument is structured into a logical sequence; and that the headings and paragraphing clearly communicate the central ideas.

Instead of noticing only what’s wrong with the writing, supervisors can encourage students by taking time to acknowledge what is right with it too. On the path towards developing writing skills, such positive feedback can be very heartening.

But lots of other supervisors are less comfortable - and much more impatient - with what I would regard as an acceptable level of English language competency.

So where do you draw the line regarding how much English is enough? Do you expect the first draft to be entirely free of any grammar errors? Do you find yourself reworking nearly every sentence so that in the end it feels as if you’ve written the entire thesis yourself? What would you like the English language entry level to be for doctoral candidates at your university?

I suspect that many of our readers have very strong opinions on this topic and it would be great to hear from you.

Friday, February 20, 2015

Applying for Postdocs: What are Your Tips?

  • Get advice from your PhD supervisor
  • Start building your networks early
  • Finding funding
  • Be cautious about firing off out-of-the-blue emails
  • Look for opportunities outside your specialism
  • Look worldwide
  • Consider opportunities for a portfolio career
  • Try working as a researcher for a company
  • If you don’t meet the essential requirements, don’t apply
  • If there is a formal application process, read the guidance
  • Avoid excessive jargon
  • The cover letter should entice the recruiter to the CV
  • Always tailor your application
  • Put yourself in the principal investigator’s shoes
  • Show that you’re a team player
  • First impressions count
  • Talk about something other than your PhD
  • Make sure you are able to work well with your prospective boss
  • Think carefully about whether you want to stay in academia
What would you add? Are there any you disagree with? Are there some disciplinary specific aspects to this question which a general article overlooks? How do these issues differ internationally?

Explainer: How and Why is Research Assessed?

Photo: Shutterstock
by Derek R. Smith, University of Newcastle

Governments and taxpayers deserve to know that their money is being spent on something worthwhile to society.

Individuals and groups who are making the greatest contribution to science and to the community deserve to be recognised. For these reasons, all research has to be assessed.

Judging the importance of research is often done by looking at the number of citations a piece of research receives after it has been published.

Let’s say Researcher A figures out something important (such as how to cure a disease). He or she then publishes this information in a scientific journal, which Researcher B reads. Researcher B then does their own experiments and writes up the results in a scientific journal, which refers to the original work of Researcher A. Researcher B has now cited Researcher A.

Thousands of experiments are conducted around the world each year, but not all of the results are useful. In fact, a lot of scientific research that governments pay for is often ignored after it’s published. For example, of the 38 million scientific articles published between 1900 and 2005, half were not cited at all.

To ensure the research they are paying for is of use, governments need a way to decide which researchers and topics they should continue to support. Any system should be fair and, ideally, all researchers should be scored using the same measure.

This is why the field of bibliometrics has become so important in recent years. Bibliometric analysis helps governments to number and rank researchers, making them easier to compare.

Let’s say the disease that Researcher A studies is pretty common, such as cancer, which means that many people are looking at ways to cure it. In the mix now there would be Researchers C, D and E, all publishing their own work on cancer. Governments take notice if, for example, ten people cite the work of Researcher A and only two cite the work of Researcher C.

If everyone in the world who works in the same field as Researcher A gets their research cited on average (say) twice each time they publish, then the international citation benchmark for that topic (in bibliometrics) would be two. The work of Researcher A, with his or her citation rate of ten (five times higher than the world average), is now going to get noticed.
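
To make that arithmetic concrete, here is a tiny illustrative sketch in Python. The numbers are made up for the example and are not drawn from any real citation database.

def normalised_citation_rate(citations_per_paper, field_average):
    """Average citations per paper for one researcher, divided by the field benchmark."""
    own_average = sum(citations_per_paper) / len(citations_per_paper)
    return own_average / field_average

# Researcher A: three papers cited 12, 8 and 10 times; world benchmark for the field = 2.
ratio = normalised_citation_rate([12, 8, 10], field_average=2.0)
print(f"Researcher A is cited {ratio:.1f} times the world average")  # -> 5.0 times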

Excellence in Research for Australia

Bibliometric analysis and citation benchmarks form a key part of how research is assessed in Australia. The Excellence in Research for Australia (ERA) process evaluates the quality of research being undertaken at Australian universities against national and international benchmarks.

It is administered by the Australian Research Council (ARC) and helps the government decide what research is important and what should continue to receive support.

Although these are not the only components assessed in the ERA process, bibliometric data and citation analysis are still a big part of the performance scores that universities and institutions receive.

Many other countries apply formal research assessment systems to universities and have done so for many years. The United Kingdom, for example, operated a process known as the Research Assessment Exercise between 1986 and 2001. This was superseded by the Research Excellence Framework in 2014.

A bibliometrics-based performance model has also been employed in Norway since 2002. This model was first used to influence budget allocations in 2006, based on scientific publications from the previous year.

Although many articles don’t end up getting cited, this doesn’t always mean the research itself didn’t matter. Take, for example, the polio vaccine developed by Albert Sabin last century, which saves over 300,000 lives around the world each year.

Sabin and others published the main findings in 1960 in what has now become one of the most important scientific articles of all time. By the late 1980s, however, Sabin’s article had not even been cited 100 times.

On the other hand, we have Oliver Lowry, who in 1951 published an article describing a new method for measuring the amount of protein in solutions. This has become the most highly cited article of all time (over 300,000 citations and counting). Even Lowry was surprised by its “success”, pointing out that he wasn’t really a genius and that this study was by no means his best work.

The history of research assessment

While some may regard the assessment of research as a modern phenomenon inspired by a new generation of faceless bean-counters, the concept has been around for centuries.

Sir Francis Galton, a celebrated geneticist and statistician, was probably the first well-known person to examine the performance of individual scientists, publishing a landmark book, English Men of Science, in the 1870s.

Galton’s work evidently inspired others, with an American book, American Men of Science, appearing in the early 1900s.

Productivity rates for scientists and academics (precursors to today’s performance benchmarks and KPIs) have also existed in one form or another for many years. One of the first performance “benchmarks” appeared in a 1940s book, The Academic Man, which described the output of American academics.

This book is probably most famous for coining the phrase “publish or perish” - the belief that an academic’s fate is doomed if they don’t get their research published. It’s a fate that bibliometric analysis and other citation benchmarks now reinforce.

This article was originally published on The Conversation. Read the original article.

Wednesday, February 18, 2015

Why Finland and Norway Still Shun University Tuition Fees: Even for International Students

hugovk, CC BY-NC-SA
by Jussi Välimaa, University of Jyväskylä

All the Nordic countries - Denmark, Finland, Iceland, Norway and Sweden - provide higher education free of charge for their own citizens and, until recently, international students have been able to study free too.

But in 2006, Denmark introduced tuition fees for international students coming from outside the European Union and European Economic Area. In 2011, Sweden followed suit. Now only Finland, Norway, Iceland and Germany do not collect tuition fees from international students.

Despite some moves to introduce fees, all these countries remain real exceptions in a world where international students are often a lucrative source of income for universities.

In Finland, the issue reared its head again last year when the government proposed that universities would be able to introduce fees for international students coming from outside the EU after 2016. After a lively public debate, in January the Finnish government decided not to go ahead with the proposals.

Researcher Leasa Weimer’s recent study concluded that the main actors opposing tuition fees were the powerful Finnish student organisations. They feared that collecting tuition fees from international students would open the gate to tuition fee reform for national students as well.

Those students, politicians and academics resisting tuition fees also said that a tuition-free system supports international social justice by giving students from developing countries an opportunity to participate in higher education.

They also argued that the introduction of tuition fees would undermine Finnish internationalisation efforts as it would be likely to lead to a significant decrease in the number of international students - as happened in Denmark and Sweden after the introduction of tuition fees there. In Sweden the drop was 80% during the two years following the introduction of fees.

New source of revenue

On the other side of the debate, the promoters of tuition fees - which include university managers, the ministry of education and business representatives - advocated a neo-liberal stance on education as a private good. They argued that competition for international students would enhance the quality of teaching and make Finnish universities more competitive in the international marketplace.

They also pointed out that it was unfair for Finnish taxpayers to pay for the education of international migrants coming to Finland, where they also enjoy good social benefits. This argument has gained traction as a populist political view in Finland. Promoters also claimed that international students would be a new source of revenue for universities.

In November, Norway’s government backed down from a proposal to introduce fees. The main arguments against the reform were quite similar to those aired in Finland: student organisations, in particular, feared a “domino effect” by which tuition fees for international students would be the first step in introducing them for domestic students.

The rectors of Northern universities and university colleges - some of which are geographically remote - argued that they would lose many international students, especially Chinese and Russian students, if they started charging tuition fees.

According to Agnete Vabo at the Norwegian Institute for Studies in Higher Education and Research, the leaders of the most prestigious universities in Norway also argued that tuition fees would mean a great loss in terms of maintaining the diversity and quality of the international student population. In a globalised world this would be very problematic.

Equality key in Nordic model

We know that education is expensive everywhere - including in Nordic countries - and that someone has to pay for it. The crucial question is who. But to answer this, it is important to pay attention to the differences between the societal goals and social dynamics of higher education in Nordic countries and countries which charge university tuition fees, such as the UK, US or Australia.

The Nordic higher education systems are almost entirely publicly funded. According to the OECD’s Education at a Glance 2014, the proportion of public funding varies between just under 90% in Sweden and 96% in Norway and Finland, whereas in England only 30% of the costs of higher education are paid by the public purse.


All Nordic countries also have a strong tradition of equality, which in education translates into offering equal educational opportunities for all citizens. The Nordic countries have policies to encourage gender equality and to support students from lower socio-economic groups to enter universities.

As a result, there is greater equality of educational opportunity in Nordic countries. Finnish students whose parents went to university are only 1.4 times as likely to participate in tertiary education as their peers whose parents did not go to university, according to the OECD.

In Sweden, a young person with university-educated parents is 2.3 times more likely to go to university themselves, while in the UK they are six times more likely.

Yet perhaps the most important difference between the Nordic countries and countries such as the UK is the ethos of education as a civil right and a public service rather than a commodity. Degrees are not seen as commodities to be exchanged in the marketplace.

As the cases of Sweden and Denmark show, the neo-liberal argument for education is not unknown in Nordic countries. But a strong counterargument is rooted in the values of Nordic welfare societies which see higher education primarily as an equality issue.

A high level of education is beneficial for the development of society including business and industry, making it a collective economic issue. With this argument, education is defined neither as a private investment nor a commodity, but a civil right. So, individual human beings should not have to pay for it.


This article was originally published on The Conversation. Read the original article.

The Post PhD Blues


When Brian sent me this post I could instantly relate. In fact, this blog is the outcome of my own PhD blues where I needed something meaningful, creative and interesting back in my life. I know many people who have finished and express similar sentiments. Here are Brian’s thoughts.

One of the posts that caught my eye recently commented on the career prospects for the newly-qualified PhD, especially outside academia. Getting a job in the first place - especially in today’s economic climate - is naturally of concern.

But the post-study period can be an unsettling time for a number of other reasons involving a range of emotions, which I’ll refer to collectively as the “post PhD blues”.

I’m in a different situation than most, in that the job I’m doing now is the same as before I started my thesis. In December 2008, I started working on an Engineering Doctorate (EngD) alongside my “day-job”. 1,731 days later I submitted my thesis for examination, and was immensely proud to graduate as Doctor of Engineering last June.

I had always harboured an ambition to do a PhD, but it seemed unlikely that a suitable opportunity would ever arise. Entrance to post-graduate education is increasingly competitive and expensive, and is practically inaccessible to those without some form of 3rd-party backing. One would have to be highly motivated and determined (or wealthy enough) to make the attempt otherwise.

To someone like me, having already established a career, the chances of becoming a mature student seemed a pipedream. Naturally, I jumped at the chance when our universities liaison manager asked if I wanted to do an EngD.

An EngD is a PhD-equivalent qualification combining technical research and study with an MBA component. Without any further prompting I came up with a project that interested me, and which was subsequently accepted by management and the university. I was in.

The four-and-a-half years or so I spent grafting away at my studies were an extraordinarily intense experience; tremendously hard work, of course, stimulating, frustrating, depressing and exhilarating in equal measure, but ultimately personally very rewarding. Passing the viva so convincingly was truly a high point. I felt on top of the world.

A PhD represents a pinnacle of learning, a measure of achievement to which considerable amounts of time and effort, as well as emotional commitment, have been devoted.

Who hasn’t suffered pangs of uncertainty over whether a line of research will be successful, or merely end up as a waste of time? More worryingly, will your efforts be good enough to convince the examiners that you are worthy of a doctorate? To put it bluntly, a PhD is b****y hard work and exacts a great toll on one’s character to see it through to the end.

A doctorate provides status in a society that values success. No wonder the sense of triumph at the end can be so potent, and the glow of personal pride so strong.

I have to admit to being disappointed, in the glow of my viva success, not to have received greater recognition from my employers. But, no matter how elated I was feeling personally, reality had to kick in at some point.

There are plenty of PhD-level engineers working in the company, so one more wasn’t going to make much of a difference to its prospects. There’s also plenty of R&D going on elsewhere in other departments. My research interests had simply to compete for attention amongst all other claims for development funding.

The first of my “post-PhD blues” is that not everyone will share your excitement at getting a PhD, or will necessarily see the same value in your research as you do. Those close to you will of course be pleased and share in your delight, but the wider world isn’t necessarily going to be bowled over by your accomplishment. In short, your hard-won sense of achievement is likely to be deflated sooner or later.

Post-PhD Blue #2 concerns the process of getting back to ordinary life after completing the PhD. Suddenly, there’s the “what-on-earth-do-I-do-now-in-the-evenings-and-at-weekends” syndrome to cope with.

For three or more years you were effectively your own boss managing your thesis from inception to completion, while having to satisfy the “must-have-it-now” demands of supervisors, university departments and sponsors alike.

Whatever else you’ve had to cope with, you’ve spent long hours chasing references, and agonised over the wording of every paragraph. You’ve burned copious amounts of midnight oil, and had critical ideas at the most unlikely hours.

After living the “PhD-lifestyle” for so long you’ve forgotten what it is like to live an ordinary 9-to-5 existence. Instead of those heady days obsessed with papers, presentations and conferences there’s now the tedium of the weekly timesheet and management priorities to cope with.

You might have hated it at the time, but you’ll gradually realise that that period in your life when you stretched your brain on the rack was a veritable paradise compared with the daily humdrum of the profit motive.

My final “post-PhD blue” is that a PhD isn’t an automatic ticket to a better life. You might expect that the doors to promotion and a higher salary would open automatically, or that there would be a sure-fire guarantee of a place on the interview shortlist. Unfortunately, life isn’t quite that easy.

For one thing, you’ll likely as not be over-qualified for a large number of jobs on offer. Moreover, experience and industry-specific knowledge will often rank as high for the prospective employer as theoretical skills and academic attainment: lack of the necessary experience can militate against making the shortlist, no matter how good you are academically.

As ever, it is also still as much “who-you-know” as “what-you-know” that gets you in line for the job you want. Networking skills are still important for the post-doc, even for preferment within a company.

You might not experience any of the above and adjust to post-PhD life without any difficulty. Others might not be so fortunate. We should, of course, aim to get the best out of our hard-won qualification whatever our circumstances. However, my experience is that a PhD/EngD is ultimately about personal fulfilment and satisfaction. Anything else is a bonus.

What do you think? Have you suffered the PhD blues? Or do you have plans on how to avoid them? Love to hear your thoughts in the comments.

Tuesday, February 17, 2015

Talking to Grandma Isn’t Social Science

Emile Durkheim (Photo: Wikipedia)
by Yolande Strengers, The Research Whisperer: https://theresearchwhisperer.wordpress.com/2015/02/17/reclaiming-social-science/

Yolande Strengers is a social scientist, Senior Lecturer and ARC DECRA Fellow in the Centre for Urban Research, School of Global Urban and Social Studies, RMIT University.

Her recently published monograph is titled ‘Smart energy technologies in everyday life’ (Palgrave Macmillan 2013).

Among other things, she’s interested in smart energy technologies and how they’re changing how we live. She tweets at @yolandestreng.

On a bad day, I feel like the social sciences are under siege. Anyone, it would seem, can do social research.

And anyone can make claims about the social world and human condition.

But on what theories and methodologies are these claims founded? What are the consequences for society when everyone is a social expert?

There is nothing wrong with having an opinion, but when opinion holds equal weight to rigorous social science research, or when opinions and dominant paradigms about human action underpin that research, we have a serious problem. Actually, we have several.

In this post, I consider where the problems lie, and how social scientists can begin to reclaim their turf.

Talking human

Aside from the weather, one thing we all love to talk about is ourselves. Most of us are full of claims about people: why we will or won’t change, what’s wrong with society, or what needs to be done to improve the human condition. There’s nothing wrong with that.

Indeed, a key success of the social sciences is their accessibility: simply by virtue of being human, everyone can speak our language.

But this comes with its problems too. Tom Nichols recently wrote about the ‘death of expertise’ in the social sciences: as people have experienced increasing access to their social networks and a wide diversity of google-driven, one-click social ‘research’, the social sciences have lost some of their credibility. Everyone feels they’re an expert.

This issue extends to professional settings too, where science and engineering disciplines are increasingly legitimised to speak human.

Zoe Sofoulis commented on this issue several years ago in her project on cross-collaboration between urban water managers and Humanities, Arts and Social Science (HASS) researchers. HASS researchers, she argued, ‘are not legitimised to “speak science” whereas scientists and engineers - being humans themselves - can “speak human” whenever they wish without obligation to refer to any specialist expertise about people, culture and society.’

The challenge comes when we assume that understanding people is easy and unproblematic. Bridge-building and surgery require accepted forms of engineering and medical expertise, but anyone can do a survey and make claims from it.

When findings from a DIY survey are held up as being of equal weight to rigorous qualitative or quantitative fieldwork, we’re all in trouble. Just as the foundations of any bridge I try to build would likely crumble, so too will the foundation of our society, if we base our policies, decisions, and programs on dodgy social research.

Talking about grandma

A related, and perhaps more worrying, problem is the absence of any social research at all. When I talk to people about my research, they naturally want to relate it to themselves and their own experiences. This is all well and good. Unless, of course, they begin to discredit or dismiss actual social research using anecdotes from their own life experience.

I first encountered this ‘grandma phenomenon’ in one of my PhD interviews with an electricity utility engineer. I was asking about householders’ increasing reliance on air-conditioning, whether it was necessary, and what utilities could do about it. He started talking about his grandma. She simply wouldn’t be able to cope without air-conditioning, he explained, and so there wasn’t a lot that could be done. End of story.

The point is not to dismiss grandmas (bless their ironed hankies), their enormous wealth of knowledge, or their legitimate vulnerabilities to heat. But it’s troubling when anecdotal stories about relatives get used by people who make decisions to validate or dismiss certain courses of action for large swathes of the population.

Talking numbers

After grandmas, there is a disturbingly large jump in the next commonly assumed form of ‘valid’ social data. We move from the small scale (grandmas) to the large (statistical or big surveys).

The reliance on large-scale representative and statistical pieces of social research reflects our long-running preoccupation with large numbers and Big Data (a social phenomenon in itself). What we often forget is that large numbers come with compromises: corners must be cut.

We normally can’t sit down and ask 10,000 or even 500 people about every detail of some aspect of their lives; so, we have to provide ‘options’ and ‘choices’ in surveys. These are usually predicated on dominant theories and assumptions about people. By asking questions premised on theories and assumptions, we create and reproduce the reality on which these are founded. Profound indeed.

There’s nothing wrong with large numbers per se - all social research has its uses - but when the only form of social data we take seriously has a number attached (or a grandma story) we ignore other forms of legitimate and valuable social expertise.

In-depth research, which is what I specialise in, is open-ended and participant-directed. It can reveal new insights and take you in surprising directions. It can challenge existing assumptions and reveal new ones. It can manifest and propose new realities.

My point isn’t to paint large-scale research as ‘bad’ and in-depth studies as ‘good’. As I’ve said earlier, all social research can be valuable. But if qualitative research isn’t accepted as ‘valid’ evidence in policy circles, then only very specific points and voices are being heard.

Talking theory

Oh yes, the dreaded T-word. The thing about theory is that it’s everywhere, whether we like it or not. When we talk about ourselves in terms of our actions, values, beliefs, behaviours or attitudes, we are understanding ourselves and how we change through a certain theoretical perspective. We are, in effect, talking theory.

Pervasive theories dominate our understanding of the social world, and are often built into DIY and large-scale research without much acknowledgement. Elizabeth Shove has termed these dominant theories the ‘ABC’ (Attitudes, Behaviour, Choice) paradigm [1]. ABC language permeates policy and change programs, to the point where it sometimes seems impossible to see alternatives.

The social sciences, however, have a plethora of models, theories, and paradigms for understanding social action and change. They all lead us in different directions, and prioritise different solutions and strategies.

Reclaiming the social sciences

How, then, do social scientists confront these challenges?  How do we distinguish ourselves and our research from the continual swathe of opinion? How do we resist what Ian Hacking refers to as the ‘avalanche of printed numbers’ [2], or carry out research that extends beyond dominant paradigms of people and societies?

The first thing we need to do is be more rigorous: to not let opinion seep in, and to be vigilant about our own assumptions and ‘pet theories’ of human action and social change.

Outside academia, social science faces a different set of challenges characterised by increasingly ‘bite-sized’ and piecemeal pieces of commissioned research, and diminishing funding available for substantive projects. It is a challenging time, but one that needs the full integrity of the social sciences to forge new pathways and possibilities.

Social scientists can speak more than human

If this all sounds like a call for social scientists to remain seated in their ivory towers and stop playing with other disciplines, it’s not. Social scientists can, and need to, collaborate with other disciplines and professions, but the terms on which this is done call for a similar level of rigour.

There is a tendency to ‘blackbox’ social scientists into the ‘human side’ of interdisciplinary projects. As a social scholar of technologies and infrastructures, I find this a frustrating box to sit in.

Pioneering projects transcend disciplinary boundaries by informing the design and development of technical projects right from the beginning. For example, they question how the design of an electricity grid (or the nature of energy reforms) connects to how we use our washing machine or run the air-conditioner.

I can hear the sceptics now: ‘but isn’t she just giving us her opinion?’ Well, yes. ‘Aren’t these claims based on anecdotal stories and personal experience?’ Well, sort of. ‘Isn’t that a bit hypocritical?’ Well, maybe. ‘How can all this possibly be achieved?’ I’m still trying to figure that out.

Still, I am a social scientist and that means I speak from a position of particular expertise. I am trying to embed these concerns in my own research and, hopefully, that counts for something. Just not in the big, statistical way.

References

[1] Shove, E 2010, ‘Beyond the ABC: climate change policy and theories of social change’, Environment and Planning A, vol. 42, pp. 1273-85.

[2] Hacking, I 1982, ‘Biopower and the avalanche of printed numbers’, Humanities in Society, vol. 5, pp. 279-95.

Thesis Know-How: Beware the Quote Dump

by Pat Thomson, Patter: http://patthomson.net/2015/02/16/thesis-know-how-beware-the-quote-dump/

I very often see first drafts of theses - and sometimes completed ones - which suffer from quote dumping.

A quote dump is when the writer inserts a very large extract of someone else’s words into a text and then does nothing with it. The quote sits there, highly visible in its indented and italicised state, inert, unyielding, impenetrable.

The quote dump often occurs in literature chapters and/or when the thesis writer is discussing theoretical literatures. It’s sometimes used when people are explaining their methodology. It can happen when people genuinely attempt to engage with other people’s words and ideas and either challenge them, evaluate them or make them into foundations for their own research.

While quote dumping might have been the way to get good marks in essays in undergraduate and Masters work, it is a learned strategy that doesn’t fly so well in a doctoral thesis.

Yes, the thesis reader wants to know what the thesis writer understands about what they have read, but they also want to know how the writer interprets and evaluates this material, not merely whether they are capable of finding and selecting a quotation.

Thesis readers also want to know what the thesis writer intended them to think about a quotation - is it a key point and if so how? What is so important about these words that they must be separated out from the rest of the text and given a prominent position? How does this sentence or five advance the argument being made?

Using quotations is, of course, a perfectly OK thing to do - I’m not suggesting a ban on quotations, rather a more thoughtful use of them. And one or two quotations without any commentary from the thesis writer can be overlooked.

But when a thesis reader finds serial quote dumping - a kind of textual fly-tipping on page after page - then they really do start to worry. Is the writer dumping quotations one after the other because they can’t actually understand the ideas properly, and the quotes stand in for a lack of real comprehension? Or are they afraid to speak out, and are hiding behind the words of important others because they just don’t think that their interpretation will stand up to scrutiny?

Is this quote dumping a kind of ventriloquizing act where the thesis writer has their hand metaphorically up the back of the people that they think need to be included? Can the thesis writer not write the ideas in their own words?

Quotations, like data, do not speak for themselves. The thesis writer needs to provide some clues to the reader about what they are to make of the quotation they are encountering. Sparely used quotations must be introduced in some way, and the reader given some guidance about how to read them.

The quotation needs to flow into the following sentences which in turn amplify and carry forward the idea that the quotation represents. Incorporating quotations into the flow of argument, through appropriate commentary, means that the thesis reader does not feel they have fallen over an obstacle every time they encounter an indented or “…” quotation.

They don’t feel that the thesis writer has simply dumped the quote because they can’t, or perhaps can’t be bothered to, make the point clear - and the reading experience stays smooth.

It is thus important for thesis writers to think carefully about when, where and why they use big slabs of quotation. The solution to quote dumping is to be judicious in the number of quotations used - and then to use that smaller number to effect.

A quotation may well be appropriate when the thesis writer wants to show how an idea was elegantly and eloquently put, how a particular idea can be delineated, how a term is defined, or when a surprising metaphor, an apt analogy or a creative association is made. Quote sparely and to effect!

Saturday, February 14, 2015

What To Do With ‘Leftover’ Data?

Raw simulated data for contextual seriation (Wikipedia)
by Cally Guerin, Doctoral Writing: https://doctoralwriting.wordpress.com/2015/02/07/what-to-do-with-leftover-data/

On winding up a research project recently, I got to thinking about the ideas and data points that didn’t make it into the final publications or conference presentations.

After collecting survey responses and focus group transcripts, the research team looked over the findings and decided how to divide them into publishable chunks.

Then for each paper we took the data that was relevant to that topic, analysed it thoroughly, and decided what the main argument could be - that is, what new knowledge was gained from that part of the research.

But there are still a few intriguing bits and pieces of data left over. The process brought home to me how often doctoral writers are faced with ideas and data that don’t quite fit into the scope of the doctorate. To avoid feeling that work is ‘wasted,’ it is helpful to think about how those leftovers might be used.

In doctoral research projects, as with any quality research, ensuring that the work has followed rigorous research methods and thus produced reliable data helps us answer the research question we set out to address in the first place.

When it comes to doctoral writing, the integrity of the methods and the interpretation of the resulting data are presented in the thesis, keeping a close eye on examiner expectations. Obviously, this is central to how a doctoral study proves itself worthy of a PhD.

Elsewhere in the DoctoralWriting blog I have written about the importance of constructing a thesis that is not simply an exact record of all the components of the research journey - decisions need to be made about what needs to be left out of the thesis, just as much as what needs to be included.

Having constructed the thesis version of the research for doctoral examination, there are likely to be bits and pieces of data that don’t find their way into the main argument, that seem to be left over at the end of the project. Sometimes, though, these leftover items of data stay in the researcher’s mind, hinting that there is more to be said about the topic, niggling away in the background and refusing to be put aside.

I firmly believe that there is a place for the intuitive hunch in research, the idea that attracts attention even when it is not fully worked out, the idea that seems to be left over from the main project. I have taken heart from the work of Maggie MacLure in relation to this.

She writes about data that ‘glows’, by which she means ‘some detail - a field-note fragment or video-image - [that] starts to glimmer, gathering our attention’ (MacLure 2010: 282). MacLure provides us with an example of how she works with such data in ‘The Wonder of Data’ (2013).

These glowing data points tell us something interesting, but maybe it’s not always a really big idea or argument that is to be made, at least in relation to the current research project. Or perhaps the glowing data stands out from what’s already been said, not contradicting the main argument (of course!), but moving off on another tangent.

It might be something really interesting even though it does not fit logically alongside the central point of the thesis chapters or articles that finally make it to the light of day.

I’ve started thinking that perhaps an important role for research blogs is to provide a place to explore leftover data that may or may not turn out to yield big ideas. When writing for the DoctoralWriting blog, I often find myself exploring ideas that start out tiny, maybe grow into a blog post, and occasionally continue to blossom into a full-sized research project.

For doctoral candidates, publishing one’s work through blogs is not always straightforward and should be approached cautiously. But perhaps a similar process of writing up short pieces that may later be revisited can be a useful practice. This procedure has the advantage of not throwing away the data that doesn’t make it into the final thesis, and of encouraging ongoing writing habits.

I’m not sure that the leftover data is a concern for all doctoral researchers, though. Perhaps the possibility of confronting leftover data is more common in qualitative research, for example, where interview participants might expand on related ideas that are not quite directly on the main topic of the formal interview questions.

In workshops I remind students that nothing is ever wasted in the work they do towards their research, and suggest that it’s good to keep any extra ideas in a separate file if they don’t seem to fit into the main thesis. I too have got one of those files and it seems to keep growing, but at the moment I’m not quite sure what I can do with it.

I’d be interested to hear from readers who have made good use of leftover data from their research, especially those who undertake quantitative studies (can ‘outliers’ be informative in unexpected but useful ways?). I suspect that the leftover data that glows can inspire us towards new research directions, whatever our methods.