Monday, 27 May 2013
Beyond Open Resource Access: An Integrative Approach to Optimal Knowledge Networking
Please go to:
http://ijmk.cgpublisher.com/product/pub.257/prod.5
Manu Rajan
National Centre for Science Information
Indian Institute of Science
Bangalore 560 012
email: manu.rajan134@gmail.com
Tuesday, 14 May 2013
Performance Measurement of Indian Institute of Science: A Proposal
A NOTE ON “PERFORMANCE MEASUREMENT IN UNIVERSITIES”
Manu Rajan
National Centre for Science Information
(25 July 2005)
This paper points out the need for
universities to demonstrate public accountability and the consequent need for
better management information for higher education administrators. The concept
of ‘performance indicators’ is introduced. Methodologies for institutional
interdepartmental comparisons as well as comparisons with identical departments
at other peer institutions are reviewed. A few national higher education
performance models in other countries, as well as institution-specific
strategies used by universities in these countries, are also examined. Measures
to be taken by the Indian Institute of Science for better management information
to meet accountability requirements and better management of resources are
outlined. The paper concludes with a note on the changes affecting the
university and the effect on productivity measurement and development of
performance indicators.
1. Introduction:
The following sections detail the different
ways in which universities and nations attempt to demonstrate public
accountability and improved resource management in higher education through
better management information. Section 2 introduces the concept of ‘performance
indicators’, chalks out a history of performance measurement in universities
and its shifting focus, and brings forth the main elements of the Delaware
Study that deals with instructional costs and faculty productivity. This
includes the construction of a ‘quantitative’ framework to assess teaching
productivity and to tie it to academic budget and resource planning, as well as a
checklist of ‘qualitative’ measures of faculty activity. It also includes ways
in which departments at one institution could be compared with identical
departments at other peer institutions through the use of the ‘benchmarking’
strategy. This section also includes a survey of national performance models in
other countries as well as institution-specific strategies. Section 3 deals
with ways in which ‘quantitative’ benchmarking data can be used. Section 4
recognizes the fact that context is a crucial component in the examination of
any quantitative information: understanding the qualitative dimension of
teaching, research and service is important for assessing productivity. This
section deals with establishing qualitative benchmarks in individual
departments. Section 5 outlines the measures to be taken by the Indian
Institute of Science for better management information to meet accountability
and resource management requirements. The concluding part, Section 6, deals
with the changes affecting the university system and the effect on productivity
measurement and development of performance indicators.
2. A survey of current attempts at
measuring performance in universities worldwide:
The Joint Commission on Accountability
Reporting (JCAR) of the USA has attempted to describe what colleges do, including
faculty activity, in terms of measurable institutional outputs. It attempts to
speak in terms of productivity (student outcomes) rather than focus solely on
input measures. But JCAR conventions tie faculty activity in general, and
instructional activity in particular, to student outcomes alone and do not
consider outcomes of the research and public service activities that faculty are
engaged in. Moreover, many courses include required laboratory and discussion
sections alongside the credit-bearing lecture section; because these sections do
not generate student credit hours, they are omitted from the
JCAR ‘student credit hour analysis of instructional activity’. Later efforts
have attempted to build a quantitative framework that (1) provides the
information and data that deans and department chairs require to effectively
manage instructional, personnel and fiscal resources and (2) addresses the
apparent gaps in the JCAR method in a fashion that allows the information to be
used not only for internal management purposes but to portray a more complete
picture of faculty productivity to those outside higher education. The
University of Delaware has carried out a study, now known as the Delaware study
that focuses on instructional costs and faculty productivity at the academic
discipline level of analysis. Other institutions have established measures that
describe not only how much activity faculty are engaged in but how well faculty
perform in those activities. Attempts are being made to effectively combine
quantitative and qualitative data into a single reporting package. The new
framework stipulates that faculty productivity in particular may be measured on
six dimensions: research quantity (R-QN), research quality (R-QL), teaching
quantity (T-QN), teaching quality (T-QL), service quantity (S-QN) and service
quality (S-QL). Assessment in all six areas can be carried out at the national,
university, institutional, departmental and individual faculty levels. A checklist of ‘qualitative’ measures of
faculty activity is given below:
. Number of refereed publications within
past 36 months
. Number of textbooks, reference books,
novels or volumes of collected works within past 36 months
. Number of edited volumes within past 36
months
. Number of juried shows or performances
within past 36 months
. Number of editorial positions held within
past 36 months
. Number of externally funded contracts and
grants received within past 36 months
. Number of professional conference papers
and presentations within past 36 months
. Number of non-refereed publications
within past 36 months
. Number of active memberships in
professional associations and/or honor societies within past 36 months
. Number of faculty engaged in faculty
development or curriculum development activity as part of their assigned
workload
. Five-year undergraduate persistence and
graduation rates for most recent cohort
. Most recent average student satisfaction scores for: quality of faculty
academic advisement; out-of-class availability of faculty; overall quality of
interaction with faculty
. Proportion of most recent graduating
class finding curriculum-related employment within 12 months of commencement
. Proportion of students passing licensing,
certification, or accreditation examinations related to academic major
. Proportion of most recent graduating
class continuing to pursue further graduate or professional education
. Number of students engaged in
undergraduate research with faculty mentor within past 12 months
. Number of students engaged in internships
or practica under direct supervision of faculty over past 12 months
. Number of students who author or coauthor
with a faculty mentor an article or chapter over past 36 months
. Number of students presenting or
co-presenting with a faculty mentor a paper at a professional meeting
(Note: Not all of the variables enumerated
above are appropriate for each and every department or program at an
institution. These variables are attractive in that they not only describe
measurable outputs from faculty activity but reveal information about the
quality of those activities: a qualitative filter is being applied to the
output number being reported. It is to be noted that faculty output also
consists of presenting papers at meetings, writing white papers and providing
public service. In addition, faculty spend extraordinary amounts of time
developing curriculum materials and teaching strategies and engaging in other
faculty development activities: faculty are expected to modernize teaching
techniques to take advantage of current technology. That technology allows
virtually asynchronous learning through the use of Internet-based teaching
modules, twenty-four-hour e-mail communication with students and creation of
learning assessment tools to measure the impact of technology on the quantity
and quality of what is being learned. A paradigm shift is occurring whereby
the emphasis in developing curricular materials is shifting to learning as opposed
to teaching. Any serious examination of the qualitative dimension of faculty
productivity must acknowledge that faculty are increasingly being required to
spend time coming to terms with and internalizing these teaching-learning
paradigm shifts. The variables above take into consideration these aspects as
well.)
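To make the six-dimension framework concrete, the following is a minimal sketch of how a productivity profile might be recorded in code. The class, field names and example figures are illustrative assumptions, not part of the framework itself.

from dataclasses import dataclass

# Hypothetical record for one assessment unit on the six dimensions above.
# The concrete measures suggested in the comments are assumptions for
# illustration only.
@dataclass
class ProductivityProfile:
    unit: str      # e.g. a department or an individual faculty member
    level: str     # "national", "university", "departmental", "individual", ...
    r_qn: float    # research quantity, e.g. refereed publications in 36 months
    r_ql: float    # research quality, e.g. a citation-based score
    t_qn: float    # teaching quantity, e.g. student credit hours taught
    t_ql: float    # teaching quality, e.g. mean student satisfaction score
    s_qn: float    # service quantity, e.g. editorial positions held
    s_ql: float    # service quality, e.g. a peer-assessed rating

physics = ProductivityProfile(unit="Physics", level="departmental",
                              r_qn=42, r_ql=3.8, t_qn=9500, t_ql=4.1,
                              s_qn=12, s_ql=3.5)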
The construction of a quantitative
framework helps in getting an assessment of teaching productivity and to tie it
in some meaningful way to academic budget and resource planning. The framework
helps in adopting a reporting structure that provides a productivity-cost
profile for each department or program at an institution. The profile brings
together traditional and non-traditional measures of productivity and
effectively links them with expenditure data. As colleges and universities
attempt to encourage more interdisciplinary study and interdepartmental
cooperation, it is imperative that workload be apportioned in a fashion
consistent with fiscal resource allocation. The productivity-cost profile may
be made up of two tables: one detailing the ‘Teaching workload data’ and the
other the ‘Fiscal data’. The teaching workload data may include data on number
of FTE (full- time equivalent) graduates, degrees granted, student credit hours
taught, % credit hours taught by tenured faculty, % credit hours taught by
other faculty, FTE students taught, FTE faculty and finally workload ratios
such as student credit hours/FTE faculty and FTE students taught/ FTE faculty.
(The concept of ‘full-time equivalency’ is new and takes into consideration the
nuances related to teaching load). The ‘fiscal data’ table may contain data on
total sponsored research/service, sponsored funds/FTE faculty on appointment,
direct instructional expenditures, direct expense/student credit hours, direct
expense/FTE students taught, earned income from instruction and earned
income/direct instructional expense. The purpose of ratios of this sort is not
to cast a department within the context of ‘empirical absolutes’ but rather to
be used as tools of inquiry for framing policy questions such as:
. If teaching load ratios (student credit
hours and FTE students taught per FTE faculty) are low, are research and
service ratios (direct expenditures per FTE faculty on appointment)
sufficiently high to provide additional contextual information as to how
faculty are productively spending their time?
. If research and service expenditure
ratios are declining over time, are teaching workloads increasing as an offset?
. If teaching load ratios are declining
over time and instructional expenditure ratios are increasing, are there
qualitative issues that can explain these trends (for example, smaller class
size, additional faculty, shift in curricular emphases)?
It is important to examine data on a
trend-line basis: any single year of data can be idiosyncratic. The data should
be viewed over a trend line as quantitative barometers for framing larger
policy questions as to how faculty in the unit are spending their time, whether
they have achieved a balance between teaching, research and service that is
appropriate to the mission of that department, and whether they are, in fact,
as productive as they can be.
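As a concrete illustration, the workload and fiscal ratios described above can be computed directly from the two tables. The sketch below uses hypothetical field names and invented figures; it is not drawn from any actual productivity-cost profile.

# Minimal sketch: compute the productivity-cost ratios described above for
# one department. All field names and figures are hypothetical.
def profile_ratios(dept):
    return {
        "SCH per FTE faculty":
            dept["student_credit_hours"] / dept["fte_faculty"],
        "FTE students per FTE faculty":
            dept["fte_students_taught"] / dept["fte_faculty"],
        "direct expense per SCH":
            dept["direct_instructional_expense"] / dept["student_credit_hours"],
        "sponsored funds per FTE faculty":
            dept["sponsored_funds"] / dept["fte_faculty"],
    }

# Examined on a trend-line basis, year by year, rather than as a snapshot:
history = [
    {"year": 2003, "student_credit_hours": 9000, "fte_students_taught": 310,
     "fte_faculty": 24, "direct_instructional_expense": 1900000,
     "sponsored_funds": 650000},
    {"year": 2004, "student_credit_hours": 9400, "fte_students_taught": 325,
     "fte_faculty": 25, "direct_instructional_expense": 2050000,
     "sponsored_funds": 700000},
]
for year in history:
    print(year["year"], profile_ratios(year))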
Some universities have adopted the
‘Balanced Scorecard’ approach in choosing their suite of performance
indicators. These include the University of Edinburgh, the Open University of
the UK, Glasgow Caledonian University, Napier University, University of
California and Ohio State University. Developed by Prof. Robert S. Kaplan and
Dr. David P. Norton at the Harvard Business School, the Balanced Scorecard was
designed to improve current performance measurement systems. The Balanced
Scorecard retains the historically widely-used financial measures and
supplements these with measures on customer satisfaction, enhancement of internal
processes and the creation of capabilities in employees and systems. The
context in which it was created was one of corporate culture: the benefits of
the approach are that it is based on a balanced set of indicators covering the
entirety of a company’s mission and goals, not just financial indicators.
It is necessary to adapt the Balanced
Scorecard approach for the not-for-profit sector, for example, by identifying
financial measures that are appropriate for institutions of higher education.
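What such an adapted scorecard might look like is sketched below; the four perspectives follow Kaplan and Norton, but every indicator named here is an illustrative assumption rather than any particular university’s actual scorecard.

# Illustrative Balanced Scorecard for a university: the four Kaplan-Norton
# perspectives, each carrying hypothetical higher-education indicators.
scorecard = {
    "financial stewardship": ["income per FTE student",
                              "research grant income"],
    "customer (students and society)": ["student satisfaction score",
                                        "graduate employment rate"],
    "internal processes": ["average time to degree",
                           "course completion rate"],
    "learning and growth": ["staff development hours per employee",
                            "new programmes launched per year"],
}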
The above dealt mainly with
interdepartmental comparisons to enable university administrators to
effectively manage instructional, personnel and financial resources. The data
can be made even more meaningful if departments at one institution could be
compared with identical departments at other peer institutions. Such a need
triggered the Delaware study, which was a major national study of the
productivity of America’s faculty. That study resulted in consistent and
reliable benchmarking data that have been used in diverse and creative ways to
better explain what faculty do, while providing better information for managing
faculty resources and containing costs. These benchmarks can be used to make
comparisons with institutional data in order to more fully understand how a
college or university is using its resources and with what degree of economy
and efficiency. They should not be used as tools for rewarding or penalizing a
given institution’s academic departments or programs. Instead they are intended
as tools for helping colleges and universities find out why their institutional
data are similar to or different from the benchmarks. The benchmarks are indeed
very powerful information tools. The Delaware study presents them in a variety
of analytical arrays. Research
universities prefer to compare their departmental teaching loads, instructional
costs and externally funded activity with those at other research universities.
It makes little sense to do head-to-head comparisons between two very
dissimilar departments or institutions: dissimilar in disciplinary orientation,
in emphasis on undergraduate teaching and in volume of separately budgeted
research. Benchmarks enable far more appropriate comparisons. It has been
observed that when dealing with faculty teaching loads, the benchmark data that
display productivity ratios (for example, student credit hours taught per FTE
faculty, class sections taught per FTE faculty and FTE students taught per FTE
faculty) are the ones that provosts and deans rely on most. In the use of
benchmarking data, tenured and tenure-track faculty are an appropriate starting
point for analysis.
(The following gives an idea of how a
‘national benchmark’ may be calculated: All institutional responses for a given
variable are summed, and an ‘initial mean’ is calculated. In order to prevent
an aberrant piece of institutional data from exerting undue influence on the
data set, discrete institutional responses are then examined to identify those
that are more than two standard deviations above or below the initial mean.
These responses are flagged as outliers and are excluded from further
calculations. The remaining responses are then re-summed and a “refined mean”
is computed. This refined mean then becomes the national benchmark.)
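The calculation just described translates directly into code. A minimal sketch follows; the two-standard-deviation cutoff is as stated above, while the sample responses are invented.

import statistics

def national_benchmark(responses, cutoff=2.0):
    # Refined-mean benchmark as described above: compute an initial mean,
    # flag responses more than `cutoff` standard deviations from it as
    # outliers, and re-average the remaining responses.
    initial_mean = statistics.mean(responses)
    sd = statistics.stdev(responses)
    retained = [r for r in responses if abs(r - initial_mean) <= cutoff * sd]
    return statistics.mean(retained)

# Invented institutional responses for one variable; the aberrant 1020 is
# flagged as an outlier and excluded from the refined mean (217.6).
print(national_benchmark([210, 225, 198, 240, 215, 1020]))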
The imposition of performance models on
institutions of higher education has become a widespread practice. National
systems are in place in France, Britain, the Netherlands, Scandinavia,
Australia and New Zealand. In federations like Germany, the US and Canada,
individual provinces and states have taken the initiative. Accountability and
service improvement are common goals of all higher education performance
models. But different national systems adopt different combinations of supplementary
goals. These include stimulating internal and external institutional
competition, verifying the quality of new institutions, assigning institutional
status, justifying transfers of state authority to institutions and
facilitating international comparisons. The Higher Education
Funding Council for England (HEFCE) set up a Performance Indicators Study Group (PISG) to
develop indicators and benchmarks of performance. In the first stage of its
study, the group focused on producing indicators for the government and funding
councils that would also inform institutional management and governance. Its
immediate priority was the publication of institutional level, output based
indicators for research and teaching. Process indicators were rejected. By the
time of its first report (PISG 1999), the group had prepared proposals for
indicators relating to:
- participation of under-represented groups
- student progression
- learning outcomes and non-completion
- efficiency of learning and teaching
- student employment
- research output
- higher education links with industry
The group also developed a set of ‘context
statistics’ for each indicator to take into account, for example, an
institution’s student intake, its particular subject mix and the educational
backgrounds of students. These will allow “the results for any institution to
be compared not with all institutions in the sector, but with the average for
similar institutions.” The next stage of the study will look at the information
needs of other stakeholders, particularly students and their advisers. The
third stage will respond to a call from the Chancellor of the Exchequer to
improve the indicators on student employment outcomes. The PISG acknowledges
that performance indicators in higher education are “complicated and often
controversial” and that “the interpretation of indicators is generally at
least as difficult as their construction”. They note that performance
indicators require agreement about the values (inputs) that make up the ratio,
reliable data collection and a consensus that a higher ratio is “better” or
“worse” than a lower ratio. It is claimed that no other country produces
higher education indicators comparable to the UK’s and that, therefore, no
meaningful international comparison is possible based on those indicators.
The Netherlands, like many other countries in Europe, follows a ‘softer’
model, involving qualitative measures and far less prominence for performance
indicators than in the UK and US. Thus, there seems to be no “ideal” model or
mix. Gibbons predicts “new benchmarking methodologies and the production of a
range of benchmarking studies right across the higher education sector” and the
use of quality indicators to rank universities “by region, by country and even
globally”.
The UK performance model also consists of a
Research Assessment Exercise (RAE) whose purpose is to enable the higher
education funding bodies to distribute public funds for research selectively on
the basis of quality. The RAE uses performance indicators. Institutions conducting
the best research receive a larger proportion of the available grant so that
the infrastructure for the top level of research in the UK is protected and
developed. The RAE assesses the quality of research in universities and
colleges in the UK. It takes place every four to five years. The RAE provides
quality ratings for research across all disciplines. Panels use a standard
scale to award a rating for each submission. Ratings range from 1 to 5,
according to how much of the work is judged to reach national or international
levels of excellence. Outcomes are published and so provide public information
on the quality of research in universities and colleges throughout the UK. This
information is also helpful in guiding funding decisions in industry and
commerce, charities and other organizations that sponsor research. It also
gives an indication of the relative quality and standing of UK academic
research. Furthermore, the RAE provides benchmarks that are used by
institutions in developing and managing their research strategies.
Many colleges and universities have moved
away from philosophical arguments about the public good derived from research
and service activities. Instead they have opted to supplement those arguments
with a language that speaks to both taxpayers and legislators - economic impact
studies. Such studies examine revenues generated from tuition and from
externally sponsored research or service contracts and grants as components of
faculty activity. Economic impact studies are now fairly commonplace among the
major US research universities: Ohio State University, University of North
Carolina and Pennsylvania State University are good examples. An economic
impact model can point out facts such as:
. Of the dollars… in total resources available
to the institution annually, only 20 percent was in the form of state
subsidies.
. The university acts as a good corporate
citizen, using dollars… of its current expenditures for public service and
extension activity.
. The total economic impact of university
employees and students on the state economy is in excess of dollars… in taxable
salaries and wages generated.
In India, the University Grants Commission
has begun the Higher Education Information Systems Project to develop a
‘transparent and comprehensive’ information system on the following:
. Monitoring of grants
. Collection of relevant data from various
institutions for statistical analysis consistent with international standards
. Recognition and management of institutions and programs based on their
level of competence and performance
. Management of university and college admissions to bring transparency into the process
. Research project management
. Expertise and facilities database to
improve the interface between academia and society.
(Performance Report for University of
Toronto is attached in Appendix 4. This contains information on where the
University of Toronto stands compared to major public research universities in
North America on various measures such as research and scholarship, scholarly
awards, library resources, technology transfer, retention rates in
undergraduate programs, student satisfaction and resources. The report also
compares the University with the ten other largest research-intensive universities
within Canada. It also includes trends over time. The performance report for
University of Calgary is in Appendix 5).
3. Using quantitative benchmarking data:
A number of strategies are employed for
using national benchmark data as a quantitative basis for academic planning and
policymaking. The University of Delaware and the University of South Carolina
use Delaware study data to prepare departmental profiles for their provost and
academic deans. The University of South Carolina have extended their analysis
by incorporating study data into a web-based warehouse. Within that Web-based
framework, the university creates departmental profiles wherein departmental
productivity and expenditure measures are compared with benchmarks for discrete
groupings of a dozen or so peer institutions identified by the university as
opposed to larger aggregate groupings such as “research universities”. Deans
and department chairs at the university are expected to use these web-based
comparisons as a component of their annual strategic planning process and to
use them when justifying requests for modified funding levels. It has been
possible over the years for the University of South Carolina to identify a
customized peer group from among all the colleges and universities
participating in the Delaware study. As a member of the Southern Universities
Group, University of South Carolina receives the data from that consortium but
is free to select additional peers. The peer group must be no smaller than five
institutions, but the
upper limit of the peer group is defined by
the requesting institution.
The University of Utah takes a somewhat
different approach to benchmarking in that it focuses largely on a single
measure, student credit hours taught per FTE faculty, and concentrates the
analysis on two faculty groups: tenured and tenure-eligible faculty, and other
full-time faculty who are non-tenurable. The volume of teaching activity, as
measured in terms of student credit hour generation within these two faculty
categories and compared with national and customized Delaware study benchmarks,
places departments in one of three groups: (1) highly productive, (2) normal, (3)
underproductive. Departments that are ‘highly productive’ are advantaged in
budget decisions, whereas ‘underproductive’ departments are disadvantaged and
are targets for budget reductions. The University of Utah does not, however,
make resource allocation and reallocation decisions solely on a single
quantitative measure. A number of other quantitative and qualitative factors
enter the budget decisions at that institution.
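A sketch of the classification step follows. The source does not give the University of Utah’s actual thresholds, so the plus-or-minus 15 per cent band below is purely an assumption for illustration.

# Illustrative only: classify a department by comparing its student credit
# hours taught per FTE faculty against a Delaware study benchmark. The
# +/-15% band counting as "normal" is an assumed threshold, not Utah's rule.
def classify_department(sch_per_fte_faculty, benchmark, band=0.15):
    if sch_per_fte_faculty > benchmark * (1 + band):
        return "highly productive"
    if sch_per_fte_faculty < benchmark * (1 - band):
        return "underproductive"
    return "normal"

print(classify_department(265.0, benchmark=217.6))  # -> highly productive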
The push for discipline-specific data is
likely to become more pronounced. With a secure Web server in place, the vast
majority of the data collection and editing will take place within a web-based
environment. And by granting institutions access to the full data set, it then
becomes possible for a college or university to select different peer groups
for different academic departments. Now a dean can look at the list of Delaware
participants and choose the twelve that seem most appropriate for the physics
department while selecting a different set of twelve or so institutions for
chemistry and so on.
4. Qualitative benchmarking in individual
departments:
Context is a crucial component in the
examination of any quantitative information. A few characteristics like
international reputation and the philanthropic support generated are not
manifestly evident in a student-faculty ratio or assorted expenditure ratios.
Understanding the qualitative dimension of teaching, research and service is
crucial to a full picture of what faculty do and how productive they are. A
number of data sources in the public domain contain benchmark data that can
assist academic departments and programs in looking at the outputs of their faculty
relative to those at other institutions. In considering where a departmental
faculty is with regard to the overall quality of the academic program, as well
as scholarly output of the faculty, Research-Doctorate Programs in the
United States (Goldberger, Maher and Flattau, 1995) is an excellent starting
point. It ranks the leading academic programs, by institution and by
discipline, in the arts and humanities, engineering, physical sciences and
mathematics, social and behavioral sciences, and the biological sciences. A
comprehensive ranking of departmental faculties, by discipline within each of
the fields listed gives an idea of the quality of faculty in a given
institution’s departments. (The quality of faculty in a given institution’s
departments is assessed from responses to the National Survey of Graduate
Faculty. As this database covers only the American programs, programs and
faculty of institutions in India are not covered: however, institutions like
IISc can make use of this database and take further steps to assess the quality
of their own academic training programs vis-à-vis those listed here.)
To examine faculty scholarly output, the
National Research Council of the US looked specifically at papers published in
refereed journals and monographs produced by recognized publishing houses; they
also noted the impact of publications on the field, as evidenced by the number
of times they were cited. These data are accessible in computerized form in the
Institute for Scientific Information’s (ISI) US University Science
Indicators database. This database contains summary publication and
citation statistics that reflect scholarly production at over one hundred major
universities throughout the United States. This enables institutions to
identify the productivity of their own academic departments within the context
of the hierarchical rankings of the major national programs in the field –
often those that faculty aspire to join as peers.
Another benchmarking resource is the NSF’s
web-based WebCASPAR (Computer-Aided Science Policy Analysis and Research) system,
which allows for the retrieval and rank ordering of data, by institution and by
discipline, related to the number of graduate students and post-doctoral
appointments, the volume of degrees awarded annually and the volume of funding. WebCASPAR
provides three-year trend data, drawn from surveys, that represent a
rich data source for benchmarking externally funded research activity.
Institutions aspiring to the top one hundred institutions in externally
sponsored research and development funds or those wishing to know their
relative position in the higher education community with respect to externally
funded research find this an excellent benchmarking source.
Even with the above sources, the quality
dimension in research and service is difficult to assess in a comprehensive
fashion - the difficulty for most colleges and universities is in collecting
data on the qualitative measures outlined earlier, over time, from a pool of
institutions sufficiently large and comparable in mission to constitute an
appropriate benchmarking pool. The Delaware study hopes to fill this gap in qualitative data sharing
in the same manner that it has succeeded for quantitative data sharing.
5. Measures to be taken by Indian
Institute of Science:
. Establish an Office of Institutional
Research as in many American universities.
. Frame explicit statements of institute
mission, goals, objectives
. Develop a suite of performance indicators
/ quantitative and qualitative barometers to enable the institute
administration to make interdepartmental and other comparisons and effectively
manage resources.
. Decide – Whom do we compare ourselves
with? (those with similar size, with a similar teaching and research pattern,
in a similar sized country….?)
. Decide on and make use of national
benchmarks of other nations depending on which other universities we want to
compare ourselves with. Choose one or more
institutions to provide a benchmark for success, with the ultimate intention of
comparing the performance of the Indian Institute of Science with these
institutions in the areas where they currently exceed us: indicators could be
turned into targets over time. (Realistic goals can be set by employing
benchmarks already achieved in other institutions).
6. Conclusion:
As the university becomes more accountable in a
‘knowledge society’, there are doubts whether it can survive in its
traditional form. Survival may depend on a much broader definition of
accountability, one that encompasses public and civic commitment. According to
Delanty, the best way to guarantee the future of the university is to
reposition it at the heart of the public sphere, “establishing strong links
with the public culture, providing the public with enlightenment about the
mechanisms of power and seeking alternative forms of social organization.”
There is a perception that the responsibility of researchers is to make their
findings available in the public sphere through publication. It is then the job
of society to use this knowledge. If this view of accountability were
sufficient, it would greatly simplify the job of developing performance
indicators for research organizations: publications alone would be enough.
Unfortunately, that indicator will not satisfy accountability demanding
constituencies. According to Cozzens and
Melkers, “state S&T programs collect publication information, but find
that job creation is the primary indicator state legislators want to see”. Many
national science policies continue to be dominated by what has been called
‘linear thinking’. In the models which emerge from this thinking, science
functions as the source of technology and the engine of economic growth. In the
linear model, the universities and some government research laboratories are
paramount, being the institutions which carry out most of the basic research.
However, it is now being recognized that ‘knowledge production’ is increasingly
becoming ‘distributed’: knowledge
production has spread from academia into all those institutions that seek
social legitimization through recognizable competence and beyond. Knowledge
production is increasingly a socially distributed process. The university, in
the emerging regime, must still be an instrument for the development of
science. The point is that it is no longer either the only or even the primary
institution on the cognitive landscape. The emergence of a socially distributed
knowledge production system brings to the fore the question of the
relationships between the university and the other knowledge producers.
Universities will need to become porous institutions: more revolving doors are
required, allowing academics out and others in. Such a development, if carried
out on a significant scale, cannot but touch questions of career development and
reward structures, and in doing so challenge the existing structures. Many OECD
countries are increasingly putting resources into the diffusion of
existing information. In several countries, new organizational arrangements for
knowledge diffusion have been created. In Sweden, for instance, Competence
Centers affiliated to universities have been established as a strategic
resource for the technological renewal of industry. This is evident in
institutions in India too.
The
most exciting area of faculty productivity will be in curriculum development,
especially in view of the impact of technology on campuses. New teaching
paradigms such as problem-based learning (PBL) are transforming the ways
faculty teach and students learn on campuses. The volume of credit-hour
production will be supplemented with information on how and how well those
credit hours are delivered. PBL is a means of instruction based on complex
problems that have real-world implications. The challenge to faculty is to
provide instructional techniques that meet the real cognitive and skill needs
of students. Measuring what and how students learn is another faculty product
that is undergoing significant transformation. A letter grade used to be the
sole indicator for assessing what students learned in courses. There are
well-established psychometric instruments in critical thinking, problem-solving
and communication, among other skills. Web-based electronic portfolios that
integrate and synthesize the knowledge gained through a broad cross-section of
courses are yet another tool for measuring student learning. However, it will be
the individual faculty member who will bear ultimate responsibility for
assessing cognitive gains in students. The development of appropriate tools for
making those assessments is quickly becoming part of the overall productivity
of faculty in the twenty-first century. Development of performance indicators
for universities has to necessarily take into account all of the above factors.
References:
Middaugh, M.F. 2001. Understanding Faculty Productivity: Standards and Benchmarks for Colleges and Universities. San Francisco: Jossey-Bass.
de la Mothe, J. 2001. Science, Technology and Governance. London: Continuum.
-------------------------------------------------------------------------------------------------------------------------
Manu Rajan
National Centre for Science Information
Indian Institute of Science
Bangalore 560 012
email: manu.rajan134@gmail.com