posted Jul 15, 2014, 2:12 AM by Tim Carroll [updated Jul 15, 2014, 2:56 AM]
Delivering software through an Agile Scrum methodology is a paradigm shift
for an organization. The mindset changes from strong project autonomy to
a much more cohesive relationship with a product. This can challenge the
human resource structures of traditional projectized units, including
budgets, position descriptions/assignments, career tracks, and people
management. The adoption of Agile Scrum methodologies and techniques is a
catalyst for this rethinking, because these methodologies require a project
team to acquire and concern themselves with domain knowledge of the
product at a much more detailed level for an extended period of time,
whereas traditional projectized team members align themselves more squarely
with a repeatable, product-domain-independent skill-set (e.g. software
delivery on the Java platform).
As this pendulum swings, an organization must be careful not to fall into
the trap of moving too far in the other direction (i.e. alignment with
product). Aligning human resource structures (again, budgets, position
descriptions, career tracks, and people management) too closely with a product
will make it difficult to isolate dollars associated with change initiatives,
and it will become much more challenging to fluidly move human resources once
products have reached a sustainable level of maturity. To prevent this, new
adopters of Agile Scrum should instead look to program management as a tool for
establishing the necessary medium-term alignment of human resources to
a particular product domain. Programs can be established as a wrapper
around Agile Scrum managed projects and used to center groups of people from
traditional projectized roles around a particular product domain without
upsetting the apple cart, and without preventing those resources (both
dollars and people) from being easily reallocated to other projects and
programs at a later date.
Since the application of program management does not tear down projectized
units or the projectized human resource structures, there are various other
concrete short-term and long-term benefits to the organization. First, budget
dollars can continue to be allocated to change initiatives and isolated from
service budgets in a way that promotes transparency and helps to prevent
unintended internal reallocations. Next, position descriptions, career tracks,
and all associated people management remain relevant and intact,
which supports both employee recruiting and retention. In other words,
the professionalization of skill-sets and all the discipline involved in
standing up a sustainable service will be unharmed, and all the human resource
infrastructure and product development processes an organization has worked
hard to achieve can still be utilized.
|
posted Jan 20, 2012, 4:47 PM by Tim Carroll [updated Mar 29, 2013, 2:29 PM]
In this downtrodden economy, the concept of "do more with less" has
become routine and cliché in our industry, particularly in higher ed.
Although "do more with less" is no doubt desirable, I believe a more
realistic spin on that catch phrase is "do less, produce more!"
A do less, produce more model consists of working smarter, making rational
decisions, and setting reasonable constraints. Information technology
organizations should be spending more time on strategic and innovative
thinking, but most of them are too bogged down in the minutia caused by
integration chaos. These organizations have to reduce diversity, not in
their culture or their workforce, but in their technology footprint. To do
this, the I.T. leadership team must better understand their role in the
organization. They must define and enforce a set of rules that transcend
guiding principles or best practices and solidify the ability to meet the
goal of do less, produce more. This starts with setting policies,
requirements, and restrictions that prevent irrational decisions and
self-destructive behavior, like:
- purchasing the cheapest network hardware, but increasing the cost of
support threefold due to the intellectual investment and cross-platform
integration requirements.
- implementing quick proprietary one-off software solutions to satisfy
immediate customer demands, but increasing the cost of delivery and
support tenfold, through the addition of new languages, new platforms,
new framework libraries, and new protocols.
As examples, lay ground rules around what languages developers can use,
what protocols they can use to exchange information, and what canned
frameworks should be used for dependency injection and RESTful service
development. The ability of one developer to implement a new system in 90
days using Ruby on Rails doesn't really make your organization more agile
when everything else is written in Java and all the other developers and
support staff are trained in Java. Instead, it results in the organization
owning two ways of doing everything and introduces the need to support a
whole other facet of the I.T. infrastructure and personnel skill-set
to continue delivering that new "cheap" application for years to come.
Technologies will continue to change, best of breed equipment and software
will forever be a moving target, and cutting edge tools and techniques
will always be emerging. New trends are intriguing and interesting to
technical staff, and avoiding them may seem like suicide; however,
technological romances will hold an organization hostage. Sticking with
what they know, an organization can solve more of the mundane problems
that plague end users day-to-day with less effort. Once they begin to
accomplish that, then they create time for themselves to explore and create
cutting edge technologies, rather than chase them.
|
posted Oct 12, 2011, 6:52 AM by Tim Carroll [updated Mar 29, 2013, 2:30 PM]
Remote working relationships have become fairly commonplace in the I.T.
industry, and they are becoming more prevalent as many families are host to
more than one career. For career-minded people, advancement opportunities
are hard to pass up; however, when both spouses work and the opportunity
requires a physical move, it is sometimes hard to get the dollars to make
sense. Moving is of particular concern when considering the perpetual
uncertainty in the economy of the new millennium. In any case, some
families take the plunge and leave the other employer scrambling to
replace an experienced and otherwise reliable employee.
As a result, smart organizations are finding ways to retain these people,
by being more flexible and by formalizing remote working relationships
and policies. For the most part, remote working relationships have
traditionally been geared toward satellite team members; however, more
recently managers have begun to work from afar, presiding over teams that
are either geographically distributed or collocated. In my case, I am a
manager that oversees multiple teams of software developers that are
collocated, so I am the satellite with about 900 miles of separation
from my teams. Remote management had no documented precedent in my
organization; however, my employer is cutting edge in many areas, and
learning to work smarter and to be more flexible is no exception.
There is no doubt that geographical separation between managers and the
teams they lead can introduce communication barriers and present leadership
challenges that need to be managed properly to preserve team morale and
ensure high performance. However, there are management tools and techniques
that can address these issues in a positive way, allowing managers to
successfully lead people and teams remotely. I have been managing remotely
for over a year now, and I'd like to share a little bit about our approach
and what has worked for us.
When I learned that I was soon going to be leading my teams from a distance, I
started doing some research to see who else was doing this and how it was
working out. After reading through a good portion of the information
available, I began to see some common themes. At some point during the
information discovery process I began to build a table of data that later
became the core of my remote management strategy. That table, the
"Specific Strategies to Overcome Known Challenges" (below), is a simple
two-column list with each common "obstacle" on the left and a countering "tactic"
for handling it on the right.
Obstacle: Maintaining a clear sense of team identity without the daily
face-to-face presence of the leader.
Tactic: Strengthen existing team identity by merging the AAA Team and the
BBB Team into the CCC Team that serves both sets of customers, creating
a stronger cohesive sense of shared purpose and commitment to team goals.

Obstacle: Developing a virtual whiteboard for sharing and collaborating on
projects, as well as communicating expectations, progress, and direction.
Tactic: Rely on familiar technologies, tools, and techniques that we have
developed in our time working together (in the same building) to serve as a
suite of virtual whiteboarding tools:
- Our established Jira presence to set expectations through project
statements of work and worth, as well as resource assignments and
prioritization among these defined projects.
- Our established wiki presence to facilitate collaboration and
documentation of development activities.
- Our established chat rooms for holding ad hoc virtual conversations.
- Skype to allow real face-to-face meetings online.
Explore the use of emerging technologies, tools, and techniques to fill the
gaps and enhance our new working environment:
- Google Wave, to allow multiple team members to work on presentations and
documents simultaneously.
- Twitter, to share real-time thoughts and accomplishments.

Obstacle: Replacing the casual pat-on-the-back and formal team celebrations.
Tactic: Establish a monthly team communication that formally acknowledges
individual and team accomplishments, and arrange for team get-togethers to
celebrate achievements during scheduled return visits to the office.

Obstacle: Replacing the occasional lunch or hallway conversation about
personal interests.
Tactic:
- Champion the use of social networking sites such as Twitter and Facebook
to keep in touch with the personal interests of team members.
- Renew the FTF meeting schedule to allow a dedicated 30-minute weekly
conference call or Skype conversation with each employee.

Obstacle: Recognizing and resolving conflicts in an environment where people
are not able to sit down together and talk through the problems.
Tactic: Be hypersensitive to the early recognition of conflict and provide a
clearly defined conflict resolution path that frames the debate with facts
and potential outcomes, then follows through by documenting and
rationalizing the decision. The process must separate the issues from the
people and focus on facilitating a resolution without letting emotions
create barriers to a mutually agreeable outcome.

Obstacle: Replacing troop rallies where a leader is tasked with reminding
people of the overall mission, restating the vision, and describing the
rationale behind the choices that have already been made.
Tactic: Create frequent Skype conference calls that include all members of a
project team to discuss overall status, share progress, and reiterate the
delivery plan. Augment this by establishing a blog-style "Decision Rationale
Journal" that can serve as a reference and reminder for past decisions.

Obstacle: Creating a workable continuous improvement program for enhancing,
replacing, and renewing the approach outlined in this document to account
for shortfalls in the original plan.
Tactic: Administer a quarterly survey of all team members to collect ideas
and field concerns that result from the remote management arrangement.
Looking back upon that strategy and more specifically the aforementioned
table, I believe it remains a really good starting point. All the obstacles
have proven to be relevant and the tactics are reasonable; however, we have
really learned to rely on a small subset of the original list of
tactics. This makes sense, as the tactics are the hard part; finding the
tools and techniques that work for your team will vary and evolve. We had
a plan, we tried a lot of things, and we kept what worked. At the end of
the first year, we ended up with three types of meetings and four tools that
facilitate the bulk of our work and working relationships:
Essential Meetings:
- I hold weekly FTF meetings with each employee on my team. The agenda covers
annual goals, reviews previous action items, and discusses current events. In
the meeting, we document the discussion topics and assign new action items.
We discuss details of tasks at hand, negotiate timelines, and clear up
questions about priority. The one-on-one nature of the meeting also creates
the opportunity to discuss more private HR issues without the awkwardness of
scheduling a special meeting to share them. These meetings are held via
conference calls, with occasional video to make it a bit more personal.
- I host a weekly whip-around meeting that includes all members of all teams.
The agenda is simple; each person shares their biggest accomplishment of the
past week and their biggest challenge of the coming week. We don't dive into
technical details, and the meeting typically lasts 15-30 minutes. The meeting
promotes transparency, awareness, and an appreciation among teammates. It
challenges each team member to bring something significant to share. These
meetings are also held via conference calls.
- I facilitate project standups (scrums) as required. Some weeks we may meet
three times, whereas other weeks we don't meet at all. These are
delivery-focused sessions where we review tasks, timelines, issues, design
details, and implementation choices. These are documented with a running
agenda that lists tasks and issues with details on who is responsible, how it
will be accomplished, and when it will be delivered. These meetings are also
held via conference calls, but they nearly always include document sharing
and co-authoring, as well as collaboration and sharing via chat windows and
screen shares.
Essential Tools:
- wiki
- phone and video conferencing
- screen share
- electronic chat (and chat presence)
Essential Techniques:
In the end, as a remote manager you need to BE THERE, and your employees
need to know it.
- Establish joint goals to get people talking and working together. This keeps
them together even when you are not there, and it helps create a healthy team
environment.
- Break the work up into manageable chunks that each provide only part of the
solution; this requires people to communicate in order for all the pieces to
come together to form a whole. This will typically result in a better
architecture, and it will definitely result in a healthier team environment.
- Don't forget to create the virtual drive-by by dropping in to see how tasks
are going. Use electronic chat to engage employees frequently throughout the
day, and make yourself visible and available through chat presence. People
will naturally try to solve problems on their own, but sometimes you want them
to ask for help. There are times when employees can feel ignored. Prompting
them to talk through an issue will help ensure that the problem or solution is
clear, but equally importantly, it will let them know you are there.
- Don't assume people are talking to one another just because they sit next to
each other. Ask your people if they have talked to one another about specific
topics or issues, and arrange for them to have discussions.
- Don't live in your inbox. Synthesize communications and capture them in a
place that everyone can continue to reference. Establish a decision rationale
journal and a chronological activity log that you and others can lean on to
show what has transpired over time. This will serve as a testament to
progress toward goals, and a reminder of hurdles that have been overcome.
Some of this may seem like communication overkill, but it is a replacement for
what you already do today without even trying. Without the water-cooler
conversations, without the drive-by conversations, without the occasional
lunch-n-chat, you will have lost your leadership edge. Remote managers have
to ensure this previously informal communication continues. Remote managers
need to BE THERE! When you schedule all these things, employees may grumble
and they may complain, but in the end they will grow to appreciate and rely
on them as much as you will.
The steady growth in remote working relationships, due to globalization and
other factors, suggests that all organizations must eventually confront this
problem. Thankfully, these challenges are not without solutions; in fact,
the solutions are oftentimes facilitated by technologies that we currently
use to solve other problems much closer to home.
|
posted Nov 16, 2010, 6:43 AM by Tim Carroll [updated Mar 29, 2013, 2:30 PM]
Portal Mobile Theme
I've been to a few conferences recently, where there has been a lot of
discussion about delivering applications to mobile users. Over the past
few years, there has obviously been a lot of growth in that area with
the iPhone and Android competition, etc. Although these platforms are
both very cool, they don't make it any easier to reach everyone.
In fact, this growth is really beginning to fragment the user population
and make it even more difficult to reach your customer.
Each of these platforms, as well as other less widely adopted ones, requires
knowledge beyond that of the typical web developer. They use a rich
client-server paradigm, along the lines of the desktop metaphor on PCs,
rather than a browser-based approach. Some, including the Apple iPhone,
also use a development environment (Objective-C in this case) that is not a
traditional web development language. Not to mention, reaching mobile
users requires a paradigm shift in user interface design.
These hurdles make it an uphill climb for web developers that are
willing to make the transition. However, there is some good news; all
of these platforms still have a browser too. A web browser leaves
the door open for traditional web developers to reach the mobile
audiences. With a good strategy this can be equally effective and
much less expensive.
This actually reminds me of a comment that I added to a java.net blog
post a couple of years ago ("JSR-286: The Edge of Irrelevance"). My comment:
well constructed argument, but i think there is more at play. the
argument does not account for the perspective or power of the
end-user. the lack of traction of the JSR-168 specification is directly
related to the barrier to adoption, and that is two fold:
- commercial vendors have never had any incentive for standardizing
their platform or framework for content delivery
- developers have never had strong reasons to design modularized
user interfaces
the onslaught of mobile devices forces a shift in this space.
commercial vendors now have a market for selling the individual
applications that used to make up their proprietary suite of
applications, and developers are being forced to design
modularized user interfaces in order to reach their users. it
seems that this could change the way vendors do business, if
they start to ask the question, "how do we get our applications
to users on their phone?". this could be pie in the sky... however,
if this led to vendors decoupling themselves from their proprietary
framework for application delivery to begin profiting in sale of
applications outside their framework, then this could be the
slippery slope into removing themselves from the business of
providing that framework for delivery... at the same time mobile
applications could create a movement among developers to design
one user interface for delivery via phone or browser. many
developers hate user interface design anyway, so designing one
that fits both needs could be an easy sell. ... the mobile
revolution could be the catalyst for further portal adoption. this
path could lead both of them to the now maturing standard for
page fragment delivery, JSR-286. And more importantly, embracing
community source solutions that are way ahead of the curve in the
implementation of containers that deliver on these
standards (i.e. JASIG uPortal).
Portal Desktop Theme
Users are not likely to stop using desktop PCs anytime soon, but the
trend toward mobile computing is clearly not going to end. To meet all
these people where they enter the internet will require change;
however, that change can be incremental. Adopting a good portal
framework can help you achieve a quicker time to market at a lower
cost.
We [LogicLander] believe that a portal still provides a strong
delivery platform, and a good portal framework will give you the
capability to reach mobile users with very little extra effort. In
other words, you develop applications once, you deploy them to one
place, and they are delivered everywhere. This gives you the ability
to reach users on Mac desktop, Windows desktop, iPhone mobile, Android
mobile, and others without the extra headaches described above. This
AT MINIMUM is a good transitional strategy, one that you cannot meet
with any other single technology.
Portal Application
Portals offer developers a delivery framework with many well
known benefits including authentication, authorization, group management,
high-level navigation, end-user customization, and organization
branding. These features allow developers to concentrate on solving
business problems, rather than wasting time re-hashing organizational
integration details. In addition to these very
tangible benefits, the portal user experience has long required
developers to think along the lines of a mobile delivery interface,
pushing them to deliver modular content that consumes less screen
real-estate. This has led portal developers to provide rich and
well-organized user interfaces that answer 80% of the at-a-glance
need, with the ability to click on relevant datapoints and dig
deeper.
This environment and background gives portal developers an edge in the
world of mobile development, as they are more skillfully prepared to
think this way. If your organization has already implemented a
portal, then you are ahead of the game. If not, we believe it is a good
next step for solidifying your position in the mobile world.
|
posted Oct 19, 2010, 11:51 AM by Tim Carroll [updated Mar 29, 2013, 2:29 PM]
Open source software presents a compelling compromise between vended solutions and in-house development efforts. It reduces or eliminates the cost of software licensing, while offering a functional product for implementation that can be customized and enhanced to fit the needs of a specific organization. By definition, the product code is fully available for modification, so it does not have the configuration or implementation boundaries of a vended application; however, there is an inherent and sometimes explicit responsibility to contribute enhancements and aid in the support of the product as a whole, so the product is not necessarily free or owned by the implementing organization.
Ultimately, open source vendors, communities, and individuals strive to provide and maintain a product that fits the needs of a broad customer base; therefore, organization specific customizations, whether contributed or not, are not typically a priority in supporting the product. In fact, some open source product providers do not allow contributions that create an organization specific feature or branch in the source. This makes organization specific customizations a slippery slope away from community based product support, toward the risk and liability of owning an in-house solution.
This paradigm creates a new challenge in service management, and it calls for a balancing act when pursuing new features and production behaviors. In order to maximize the benefits of adopting open source software products, the implementation and service administration team must work closely with the open source product provider to make design choices that do not deviate too far from the product vision and to be mindful of the impact of implementing new requirements.
Having a starting point and nearly free rein over customization can tempt a team to evolve the source beyond the capacity of the organization to support it. However, don't forget that when you either can't or don't contribute the enhancements back to the community, then you own it... And that, my friend, can hold the organization back, as well as hurt the reputation of the open source movement.
|
posted Oct 19, 2010, 11:37 AM by Tim Carroll [updated Mar 29, 2013, 2:30 PM]
I appreciate that Google provides me all these convenient and easy-to-use tools to host my domain, and they do it all for free. However, it has always been agitating that users could not get to my website without typing the infamous dub, dub, dub at the front. Recently, I found a way to conquer this problem, and I thought others might want to do the same. Interestingly enough, I was able to accomplish this by using another free Google tool: Blogger! A domain without a prefix of some kind like "www." or "mail." is referred to as a "naked domain". For example:
- public subdomain = www.yourdomain.com
- private subdomain = mail.yourdomain.com
- naked domain = yourdomain.com
Many people, in haste or habit, will leave off the "www" at the beginning of a web address when typing into the browser location bar (as in bullet three above), and they expect to land on the www home page of the site. Most domains are configured to send you to www.yourdomain.com even when the www is left off. With Google domain hosting (the free version anyway), there is no out-of-box way to configure this option. Therefore, you're forced to use a hack. However, it just so happens that Blogger.com, Google's free blogging application, has the ability to resolve a naked domain. So, that is what I used to train my Google-hosted domain to resolve logiclander.com. Here are the steps you can take to accomplish this for your Google-hosted domain:
- Create an account at blogger.com (or use an existing account).
- Create a new blog with this account. Call it something simple like "nkdom" for naked domain, or something else that is available. The name is nondescript, but it is not important; this is not a blog that you will direct anyone to, or post to for that matter.
- Create a CNAME entry at your domain registrar (e.g. godaddy.com, or wherever your Google DNS service is registered). Call it something like "nkdom" too, and point it to ghs.google.com (like your other CNAME aliases for all things hosted at Google). Your domain registrar screens will vary, but your entries will look something like Illustration A below. NOTE: The name is not important, as no one will actually navigate here in the address bar. Also, the CNAME DOES NOT have to match the blog name created above, but it will NEED TO match the name in the next step.
- Back at blogger.com: Navigate to Settings --> Publishing for your naked domain (nkdom) blog. On this form, select the option to publish to a custom domain, then enter the fully qualified domain name using the CNAME that you created at your domain registrar above. For example, http://nkdom.yourdomain.com. Also on this screen, click to put a check in "Redirect yourdomain.com to nkdom.yourdomain.com". IMPORTANT... This is the whole reason you're here. This is the feature that Blogger.com offers to enable naked domain aliasing.
- Now navigate to Layout --> Edit HTML for your naked domain (nkdom) blog. Use this form to add a <meta http-equiv="refresh" content="0;url=http://www.yourdomain.com" /> tag to the head of your blog template. As with all the examples here, remember to change references to "yourdomain" to your actual domain name.
- Back at your domain registrar: You will need to create "A" records for one or more of the Google Apps IP addresses. I just entered all of them. NOTE: do not remove the wildcard entry that routes all of your named subdomains. In the end, you will have five-ish entries that apply to your Google hosting (* points to 216.21.239.197, and four blank entries that point to 216.239.32.21, 216.239.34.21, 216.239.36.21, and 216.239.38.21). Again, your domain registrar screens will vary, but your entries will look something like Illustration B below.
- Now, you play the waiting game. It can take several hours for your registrar to update its DNS tables; however, I have generally seen this take effect within an hour or two.
Illustration A: (registrar CNAME entry screens; image not reproduced here)
Illustration B: (registrar "A" record screens; image not reproduced here)
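For reference, here is a rough sketch of the entries described in the steps above, written in BIND-style zone file notation. This is an assumption about presentation only; your registrar's web forms will differ, and the names and IPs are exactly the ones from the steps:

```
; CNAME alias for the naked-domain helper blog (must match the name used at blogger.com)
nkdom   IN  CNAME  ghs.google.com.

; wildcard entry that routes your named subdomains (do not remove)
*       IN  A      216.21.239.197

; blank-name "A" records pointing the naked domain at the Google Apps IPs
@       IN  A      216.239.32.21
@       IN  A      216.239.34.21
@       IN  A      216.239.36.21
@       IN  A      216.239.38.21
```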
After this takes effect, typing yourdomain.com (without the dub, dub, dub) in the browser location bar will get resolved to the Google IP. Then, Google will know that the domain belongs to a blog from Blogger.com, so it will reply with a 302 redirect to nkdom.yourdomain.com. Ultimately, this is simply using your naked domain blog as a soft redirect to your www.yourdomain.com.

If you've made it this far, you're probably asking... Why doesn't Google just have that same "Redirect yourdomain.com to nkdom.yourdomain.com" checkbox in my Google Apps domain control panel? Good question! Enjoy... Hope this helps.
|
posted Oct 19, 2010, 11:04 AM by Tim Carroll [updated Mar 29, 2013, 2:30 PM]
I recently completed a technical proof-of-concept project using the eDocLite functionality of Kuali Rice 0.9.3. The project went well, and it resulted in the call for a pilot to begin in early February. Since the proof-of-concept effort, a new version of the Rice framework was released, and it has some fairly dramatic data model and identity management changes. I'm actually having regular nightmares about the data migration process that is underway in our conversion from uPortal 2.6.1 to 3.1.1; therefore, moving forward on a platform that was already losing favor to the new flavor didn't seem like a prudent choice. In an attempt to avoid such a venture with the Kuali Rice product, I recently installed the latest version of the framework and began slugging away. Lucky for me, most of my hard work remains intact. However, I did find a few gotchas, so I thought I'd share what I uncovered.

There are two main categories of changes that have an impact on the eDocLite workflows. First, a re-factoring that packaged Rice as a product of the Kuali Foundation, replacing the legacy package names that chronicled its history as an open source effort kicked off by some of the key players in higher ed community source. And second, a rewrite that unifies the identity management componentry across all the current Kuali projects (i.e. Financials, Coeus, Student).

I was able to fix issues with the package name refactor through a few simple search and replace operations in my eDocLite source files before importing them using the Rice Ingester:
- replace "edu.iu.uis.eden.edl." with "org.kuali.rice.kew.edl."
- replace "edu.iu.uis.eden.routetemplate." with "org.kuali.rice.kew.rule."
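If you have more than a couple of source files, those two replacements are easy to script. Here is a minimal Python sketch; the directory layout and the in-place rewrite of `*.xml` files are assumptions, so adjust the glob to match wherever your eDocLite sources live:

```python
# Apply the Rice package renames to eDocLite XML sources before re-ingesting.
import pathlib

# Legacy package prefix -> new Kuali Rice package prefix (from the list above)
RENAMES = {
    "edu.iu.uis.eden.edl.": "org.kuali.rice.kew.edl.",
    "edu.iu.uis.eden.routetemplate.": "org.kuali.rice.kew.rule.",
}

def migrate_text(text: str) -> str:
    """Return text with every legacy package prefix replaced."""
    for old, new in RENAMES.items():
        text = text.replace(old, new)
    return text

def migrate_files(src_dir: str) -> None:
    """Rewrite every XML file in src_dir in place (assumed layout)."""
    for path in pathlib.Path(src_dir).glob("*.xml"):
        path.write_text(migrate_text(path.read_text()))
```

Run `migrate_files("path/to/edoclite/sources")` once, then import the results with the Ingester as usual.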
Keep in mind that there may be other similar situations, but these are the only ones that I encountered. The changes prompted by the identity management rewrite are not much more complicated, but they do require a bit more explanation.
- Importing Users: The XML syntax for importing users is the same, because the developers used an adapter pattern to map the old XML nodes to the new data model. However, the mappings are not readily apparent, and they don't seem to be documented anywhere yet.
- <displayName> and <uuId> don't seem to map to anything
- <workflowId> maps to krim_prncpl_t.prncpl_id
- <authenticationId> maps to krim_prncpl_t.prncpl_nm
- <emplId> maps to krim_entity_emp_info_t.emp_id
- <emailAddress> maps to krim_entity_email_t.email_addr
- <givenName> maps to krim_entity_nm_t.first_nm
- <lastName> maps to krim_entity_nm_t.last_nm
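To make the mapping list concrete, a user import using those elements might look like the following. Only the element names come from the list above; the values, and the surrounding <users> wrapper, are illustrative placeholders rather than a guaranteed schema:

```xml
<users>
  <user>
    <workflowId>1001</workflowId>            <!-- krim_prncpl_t.prncpl_id -->
    <authenticationId>jdoe</authenticationId><!-- krim_prncpl_t.prncpl_nm -->
    <emplId>E1001</emplId>                   <!-- krim_entity_emp_info_t.emp_id -->
    <emailAddress>jdoe@example.edu</emailAddress>
    <givenName>Jane</givenName>              <!-- krim_entity_nm_t.first_nm -->
    <lastName>Doe</lastName>                 <!-- krim_entity_nm_t.last_nm -->
  </user>
</users>
```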
- Importing Groups: The XML syntax here is quite different, mostly due to the need to support group namespaces. Beyond the required changes below, there are also some optional additions. A few good samples of the new format can be found in the Kuali source.
- the container tag <workgroups> changes to <groups>
- the container tag <workgroup> changes to <group>
- there is a new tag <namespace> for qualifying groups (I used KR-WKFLW for all mine) and maps to krim_grp_t.nmspc_cd
- <workgroupName> changes to <name> and maps to krim_grp_t.grp_nm
- <description> remains <description> and maps to krim_grp_t.grp_desc
- the container tag <members> remains <members>, but the children change:
- <authenticationId> changes to <principalName> and maps via krim_prncpl_t.prncpl_id to provide krim_grp_mbr_t.mbr_id
- <workgroupName> changes to <group>, and is now a container for <name> and <namespace>, and it maps via krim_grp_t.grp_nm, krim_grp_t.nmspc_cd to provide krim_grp_mbr_t.mbr_id
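Putting those group changes together, a minimal before-and-after pair might look like this. The group name, description, and member are illustrative, and the outer wrapper elements follow the tag renames listed above rather than a documented schema:

```xml
<!-- Old 0.9.3-era format -->
<workgroups>
  <workgroup>
    <workgroupName>rice-admin</workgroupName>
    <description>Rice administrators</description>
    <members>
      <authenticationId>jdoe</authenticationId>
    </members>
  </workgroup>
</workgroups>
```

```xml
<!-- New namespaced format -->
<groups>
  <group>
    <namespace>KR-WKFLW</namespace>
    <name>rice-admin</name>
    <description>Rice administrators</description>
    <members>
      <principalName>jdoe</principalName>
    </members>
  </group>
</groups>
```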
- Referencing Groups: All groups now require a namespace, and an abbreviated syntax is available that works for eDocLite references. This is accomplished by prefixing the group name with the namespace followed by a colon. For example:
- <superUserWorkgroupName>rice-admin</superUserWorkgroupName>, becomes
- <superUserWorkgroupName>KR-WKFLW:rice-admin</superUserWorkgroupName>
- Helper Class: The signature of the isUserInGroup method found in WorkflowFunctions also changes to accommodate group namespacing (boolean isUserInGroup(String namespace, String groupName)). For example:
- <xsl:variable name="authZ" select="my-class:isUserInGroup('rice-admin')"/>, becomes
- <xsl:variable name="authZ" select="my-class:isUserInGroup('KR-WKFLW','rice-admin')"/>
After making these modifications, I was able to demonstrate my eDocLite workflows using the latest version of Rice. I believe these same changes will address the needs of most eDocLite conversions, although I'm sure there are some stones left unturned. In any case, I hope this helps some folks save some time. |