RECMGMT-L Archives

Records Management

RECMGMT-L@LISTSERV.IGGURU.US

Subject: Re: [RM] GARP Assessment
From: Tod Chernikoff <[log in to unmask]>
Reply To: Records Management Program <[log in to unmask]>
Date: Fri, 20 Apr 2012 15:33:59 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (114 lines)

A number of very good points, Larry. I am waiting to see what the responses to
the Presidential Memo show, along with the resultant directive and future
changes to Federal RIM... it should be interesting.

As to the use of tools such as Agency self-assessments (I have conducted a
number of them), the GARP assessment (I have not seen it yet), or the
NetDiligence tools (I have used both versions with some very large clients):
it can be hard to aggregate the answers, and the resulting numbers/scores can
be hard for some (not all) organizations to interpret. That is especially true
of the comparisons to others who have taken the survey; the last time I used
one, the actual number of prior users to compare against was lower than I had
expected, and IIRC I had to ask what that number was. In some cases it is not
the numerical results that are the most important, but reading between the
lines to determine why the response was what it was. And thinking back to a
meeting yesterday with a client, one person commented that some people believe
self-assessments are useless anyway.

Post-meeting, I commented to a workmate that the assessment and its resultant
score are not the most important part; what matters is what comes after - the
roadmap, the strategy, and the actions to improve on the current state.

Happy Friday All!

Tod Chernikoff, CRM, CIP
[log in to unmask]
www.twitter.com/tchernik 

-----Original Message-----
From: Records Management Program [mailto:[log in to unmask]] On Behalf
Of Larry Medina
Sent: Friday, April 20, 2012 2:28 PM
To: [log in to unmask]
Subject: Re: [RM] GARP Assessment

All of this is well and good, except that no one is validating or verifying
the data provided during the assessment - after all, it *IS* a SELF
assessment.

Unless the individuals providing input from the various data collection
points are familiar with the requirements of each element and are ranking
them honestly across the entire enterprise when performing an assessment,
you're going to get mixed results (at best) when the data is aggregated.
And when data of this type (performance against criteria) is captured and
presented to management (who are generally the ones who authorize the
expenditure of effort and allocate the costs to do a study), there aren't
many participants willing to bare their souls and confess their shortfalls.

As an example, Federal Agencies have been performing self-assessments of
their RM programs for years now, with multiple collection points on a range
of criteria, each collector entering data as they interpret it (or desire to
present it), and the data then being aggregated to give a 'score' for the
Agency.  If you've been reading the results following the collection and
input over the past 4-5 years, it seems few (if any) Agencies are below an
80% rating... however, fewer than 5% of them are managing electronic records
to the established criteria, and fewer still manage email in any way other
than to "print and file" that which represents a record.  This was borne out
starting in 2010, when the IG's office began spot checks of the
self-assessment data and writing findings that determined the input was
SEVERELY FLAWED.

Next? Enter the Presidential Records Memo... requiring the appointment of an
individual to respond for each Agency, albeit on a limited number of
criteria, with the intent that those responses go to OMB.  If that had
happened, the data would have been aggregated by a neutral source and the
findings would have been taken at face value - and I personally think (yes,
this is opinion here) that the results would have been honest and bare-bones,
and that criticism would have fallen OPENLY on the weakest link in Federal
RM, the Agency that provides guidance and direction to Federal Agencies.

Instead, a change happened mid-stream and respondents were told to send
their responses to NARA.  Although I have only seen "live tweeted" content
from two of the DC/VA meetings where NARA representatives made presentations,
and have read a few articles from the AOTUS, I can tell you that the
resultant findings are NOT being portrayed as honestly as the input was
presented, at least according to the Agencies I've communicated with.

So, with this said... if you are buying an 'assessment tool' from an
organization whose criteria may not be pertinent to all facets of your
organization, or if your organization places a different "weight" on certain
facets than the tool does, an aggregated product may have little value to you
in determining how good or bad your program is.
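
To make that weighting point concrete, here is a minimal sketch in Python
(the criteria, scores, and weights are hypothetical - not taken from any
actual assessment tool): the same raw self-assessment answers aggregate to
very different overall scores depending on whose weights are applied.

    # Hypothetical facet scores (0-100, self-reported) and two weighting
    # schemes: the vendor tool's equal weights vs. an organization's own
    # priorities.
    raw_scores = {
        "retention_schedules": 90,
        "electronic_records": 40,
        "email_management": 30,
        "training": 85,
    }

    tool_weights = {facet: 0.25 for facet in raw_scores}  # tool: all facets equal
    org_weights = {                                       # org: e-records/email weighted most
        "retention_schedules": 0.10,
        "electronic_records": 0.40,
        "email_management": 0.40,
        "training": 0.10,
    }

    def aggregate(scores, weights):
        """Weighted average of facet scores."""
        return sum(scores[f] * weights[f] for f in scores) / sum(weights.values())

    print(f"Tool-weighted score: {aggregate(raw_scores, tool_weights):.1f}")  # 61.3
    print(f"Org-weighted score:  {aggregate(raw_scores, org_weights):.1f}")   # 45.5

Same answers, very different 'grade' - which is exactly why an aggregated
number built on someone else's weighting may tell you little about your own
program.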

Similarly, if the resultant data is being crunched by someone else,
'anonymized and aggregated', and used to generate statistics for
'benchmarking results across industry and other business metrics', then you
should either be given a discount on your purchase for assisting them, or
the tool should be free.

The primary point here is that, like all surveys, the data that comes out is
only as good as the validation of the data that goes in - and unless that
validation is done by a neutral party that "has no dog in the fight", and all
data for any given industry segment is verified in the same way, how can you
use the results (no matter the sample size) for benchmarking?

Larry
[log in to unmask]

--
*Lawrence J. Medina
Danville, CA
RIM Professional since 1972*

List archives at http://lists.ufl.edu/archives/recmgmt-l.html
Contact [log in to unmask] for assistance
To unsubscribe from this list, click the below link. If not already present, place UNSUBSCRIBE RECMGMT-L or UNSUB RECMGMT-L in the body of the message.
mailto:[log in to unmask]
