RECMGMT-L Archives

Records Management

RECMGMT-L@LISTSERV.IGGURU.US

Subject:
From: Larry Medina <[log in to unmask]>
Reply To: Records Management Program <[log in to unmask]>
Date: Fri, 20 Apr 2012 11:27:44 -0700
All of this is well and good, except that no one is validating or verifying
the data collected during the assessment; after all, it *IS* a
SELF-assessment.

Unless the individuals providing input from the various data collection
points are familiar with the requirements of each element and are ranking
them honestly across the entire enterprise when performing an assessment,
you're going to get mixed results (at best) when the data is aggregated.
And when data of this type (performance against criteria) is captured and
presented to management (generally the ones who authorize the expenditure of
effort and allocate the costs to do a study), there aren't many participants
willing to bare their souls and confess their shortfalls.

As an example, Federal Agencies have been performing self-assessments of
their RM Programs for years now, with multiple collection points on a range
of criteria, each collector entering data as they interpret (or desire to
present) it, and the data then being aggregated to give a 'score' for the
Agency.  If you've been reading the results following the collection and
input over the past 4-5 years, it seems few (if any) Agencies are below an
80% rating... yet fewer than 5% of them are managing electronic records to
the criteria established, and fewer still have email managed in any way
other than to "print and file" that which represents a record.  This was
borne out starting in 2010, when the IG's office began spot-checking the
self-assessment data and writing findings that determined the input was
SEVERELY FLAWED.

Next? Enter the Presidential Records Memo... requiring the appointment of
an individual to respond for each Agency, albeit on a limited number of
criteria, with the intent that those responses go to OMB.  If that had
happened, the data would have been aggregated by a neutral source and the
findings would have been taken at face value, and I personally think (yes,
this is opinion here) that the results would have been honest, bare-bones,
and criticism would have fallen OPENLY on the weakest link in Federal RM:
the Agency that provides guidance and direction to Federal Agencies.

Instead, a change happened mid-stream and respondents were told to send
their responses to NARA.  Although I have only heard "live-tweeted" content
from two of the DC/VA meetings where NARA representatives made
presentations, and have read a few articles from the AOTUS, I can tell you
the resultant findings are NOT being portrayed as honestly as the input was
presented by the Agencies I've communicated with.

So, with this said... if you are buying an 'assessment tool' from an
organization that uses criteria that may not be pertinent to all facets of
an organization, or if your organization places different "weight" on
certain facets than on others, an aggregated product may have little value
to you in determining how good or bad your program is.

Similarly, if the resultant data is being crunched by someone else,
'anonymized and aggregated', and used to generate statistics for
'benchmarking results across industry and other business metrics', then you
should either be given a discount on your purchase for assisting them, or
the tool should be free.

The primary point here is that, like all surveys, the data that comes out
is only as good as the validation of the data that goes in.  And unless that
validation is done by a neutral party that "has no dog in the fight", and
all data for any given industry segment is verified in the same way, how can
you use the results (no matter the sample size) for benchmarking?

Larry
[log in to unmask]

-- 
*Lawrence J. Medina
Danville, CA
RIM Professional since 1972*

