Fixed price
8.000 €
per Automic system · excl. VAT
Full Guide · Consulting Package

Automic System Health Check — stability & performance

Full Guide for the Automic System Health Check consulting package — the 5-day expert review Tricise delivers to audit and optimise one Automic system.

Price: 8.000 € excl. VAT
Duration: 5 days
Delivery: Remote / on-site
Scope: 1× Automic system
1 · Why a Health Check matters

Why a structured Health Check

Most enterprise Broadcom Automic Automation landscapes follow the same story: a stable initial install, years of quiet operation, gradual extension, and a growing assumption that “it just runs”. Schedules grew, agents multiplied, new applications were connected, versions were upgraded — and the underlying platform kept up, but nobody has since taken a structured look at whether it is still sized, configured and tuned the way the current workload requires.

The Automic System Health Check is Tricise’s answer to exactly that situation. It is a 5-day, expert-led, fixed-price review of one Automic system, designed to identify weaknesses, uncover optimisation potential, and produce a structured, documented roadmap for remediation. The engagement is delivered remotely or on-site by Broadcom-certified Tricise experts and uses our own SQL-based analysis tooling to measure how the platform is actually behaving — not just how it was set up years ago.

The guiding principle behind the Health Check is simple: never change a running system — but never assume a running system is still optimal either. Historically grown Automic environments often carry settings that have been obsolete for several versions (for example the legacy FT_VERSION parameter in UC_HOSTCHAR_* system variables), over-provisioned resources from a different era of workload, or under-tuned components that are quietly approaching their limits. The Health Check surfaces all of that in one structured engagement so your team can prioritise what matters, without having to first run the discovery themselves.
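
To make the legacy-settings idea concrete, here is a minimal Python sketch of how obsolete keys could be flagged in an exported set of UC_HOSTCHAR_* variables. Only FT_VERSION is taken from the text above; the export format, the deprecated-key list and the second sample key are illustrative assumptions, not the actual Tricise catalogue or tooling.

```python
# Illustrative sketch only: flag known-obsolete keys in an exported set of
# UC_HOSTCHAR_* system variables. The deprecated-key list is an assumption
# for demonstration -- only FT_VERSION comes from the text above.

DEPRECATED_KEYS = {"FT_VERSION"}  # a real check would use the full catalogue

def find_obsolete_settings(variables: dict[str, dict[str, str]]) -> list[tuple[str, str]]:
    """Return (variable, key) pairs whose key is on the deprecated list."""
    hits = []
    for var_name, settings in variables.items():
        if not var_name.startswith("UC_HOSTCHAR_"):
            continue  # only host-characteristics variables are in scope here
        for key in settings:
            if key.upper() in DEPRECATED_KEYS:
                hits.append((var_name, key))
    return hits

# Hypothetical export -- variable contents are made up for the example:
export = {
    "UC_HOSTCHAR_DEFAULT": {"FT_VERSION": "2", "SOME_OTHER_KEY": "60"},
    "UC_HOSTCHAR_WIN": {"SOME_OTHER_KEY": "30"},
}
print(find_obsolete_settings(export))  # -> [('UC_HOSTCHAR_DEFAULT', 'FT_VERSION')]
```

In the actual engagement this discovery is done by the consultant with the Tricise evaluation catalogue; the sketch only shows the shape of the check.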

A clear picture of the current platform health is also the foundation for informed planning across several fronts — sizing decisions, upgrade projects, agent consolidation, performance tuning, and operational budgeting. None of these conversations are productive without numbers and documented findings. The Automic System Health Check is how you get them.


2 · Method

The 5-day method

The Automic System Health Check is delivered as a single, fixed-scope engagement of five consecutive working days. The five days are not a rigid day-by-day agenda — they are a budget of expert time that is allocated across four clear stages: preparation, data collection, analysis, and joint walkthrough. The exact distribution depends on the landscape, the access situation, and how quickly your team can answer questions along the way.

STAGE 1 — PREPARATION & ACCESS
Scoping and environment access

Before the engagement begins, Tricise and your team align on which Automic system is in scope, who the customer-side contacts are, and which access (Automic user, database read access, log file access) the consultant will need. The SQL scripts that Tricise uses for the data collection phase are introduced and reviewed with your team, so you see exactly what they do before they touch your environment. This stage is deliberately light on effort on the customer side — typically one short scoping call and one access provisioning session.

STAGE 2 — DATA COLLECTION
Run the analysis tooling against the live system

With access in place, the Tricise consultant runs the analysis scripts against the Automic repository, collects log files from the Automation Engine and the AWI, captures configuration files and system variables, and extracts the historical runtime data needed for the performance evaluations. All data collection is read-only — nothing in the Automic system is modified during this stage. The collection is lightweight from a system-load perspective and comparable to a reporting query.

Depending on the size of the landscape, the data collection takes between half a day and a full day of consultant time.
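
To illustrate how lightweight and strictly read-only this stage is, the following Python sketch uses an in-memory SQLite database as a stand-in for the repository. The table and column names are placeholders, not the real Automic schema; the point is that the collection issues SELECT statements only and never modifies data.

```python
import sqlite3

# Stand-in for the Automic repository: table and column names are placeholders,
# NOT the real schema. The actual Tricise scripts target the repository directly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE run_history (obj_name TEXT, status INTEGER)")
conn.executemany(
    "INSERT INTO run_history VALUES (?, ?)",
    [("JOBS.WIN.BACKUP", 1900), ("JOBS.WIN.BACKUP", 1800), ("JOBP.DAILY", 1900)],
)

# The collection phase issues SELECTs only -- nothing in the system is changed,
# and the load profile is that of an ordinary reporting query.
rows = conn.execute(
    "SELECT status, COUNT(*) FROM run_history GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # -> [(1800, 1), (1900, 2)]
```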

STAGE 3 — ANALYSIS
Expert review across all nine evaluation areas

This is the heart of the engagement. The Tricise consultant works through the collected data and applies the evaluation catalogue described in the next section: system architecture and hardware, database and processes, system settings and incompatibilities, Automic system configuration, AE performance and work processes, performance toward the database, object activations and load distribution, object analysis, and the Webinterface (AWI). Each area produces findings, each finding is documented, and every finding is linked to a concrete recommendation with an effort and impact rating.

The analysis stage uses the largest share of the 5-day budget — typically two to three days — because this is where experience and pattern recognition turn raw data into useful conclusions.

STAGE 4 — WALKTHROUGH & REPORT
Joint result session and written report

The engagement ends with a joint walkthrough session in which the Tricise consultant presents the findings, the recommendations and the graphical evaluations to your team. The session is interactive: your team asks questions, challenges findings where useful, adds context the consultant could not see from outside, and confirms priorities. The outputs of this session feed into the final written report — a comprehensive document that your team can work from on its own in the weeks and months following the engagement.

The walkthrough is deliberately scheduled towards the end of the 5 days so there is no gap between “analysis done” and “findings handed over” — you leave the engagement with the understanding, not just the document.

What 5 days buys you. Five consecutive days of Broadcom-certified expert time, fully focused on one Automic system, with no parallel work and no competing priorities. The fixed-price model means you know exactly what the engagement costs up-front and exactly what you receive; scope creep is not a concern because the evaluation catalogue is standardised.

3 · Evaluation catalogue

What we analyse

The Automic System Health Check evaluation catalogue is the result of many years of Tricise consulting experience and covers more than 30 structured diagnostic checks grouped into nine main areas. The categories below describe the types of evaluation performed during the engagement — each one surfaces a different class of issue, and together they give a complete picture of the platform’s health. The exact metrics, thresholds and queries are part of Tricise’s consulting method and are applied by the consultant during the engagement; what you see here is what the findings look like, not how they are computed.

System architecture & hardware
Analysis of the current system architecture and verification that it still matches the requirements and the actual size of the Automic landscape. Many environments were sized years ago for a different workload profile and have quietly drifted — some are under-provisioned and starting to creak, others are significantly over-provisioned and wasting infrastructure. What it reveals: whether your architecture has outgrown its original design, whether sizing is still balanced against the current load, and whether it is ready for the next 2–3 years of growth.
Database & processes
Review of the database configuration, key Automation Engine processes, and their interconnections for efficiency and stability. Even though Automic is “only a user” of the database, the repository has its own parameters and best practices depending on the backend in use. What it reveals: database parameters that need tuning, process interconnections that introduce latency or contention, and configuration patterns that limit throughput during peak hours.
System settings & incompatibilities
Review of all Automic configuration files and system variables (UC_*) and identification of obsolete, deprecated or incompatible settings that have survived multiple version upgrades. Historically grown systems often carry parameters from versions below 12.x that are no longer used — for example FT_VERSION in UC_HOSTCHAR_*. What it reveals: legacy settings that can safely be cleaned up, incompatible components that could cause problems during the next upgrade, and configuration drift relative to current Broadcom recommendations.
Automic system configuration
Deep review and optimisation of the Automation Engine (AE) system configuration — the ucsrv.ini, server parameters, system-wide settings and routing rules that shape how the platform behaves. This is where many “we always did it that way” decisions hide, often with real impact on throughput, stability and behaviour under load. What it reveals: configuration parameters that should be adjusted for the current workload, settings that disagree with current Broadcom best practice, and opportunities for stability improvements.
AE performance & work processes
Performance analysis of the Automation Engine based on current and historical runtime data, including sizing of the work processes (WPs) and the ratio between the number of started work processes and actual system utilisation. More work processes are not always better — an over-provisioned WP count wastes resources, while an under-provisioned one causes queueing under peak load. Server routines such as UCGENX_R/GENX are reviewed in the log files for patterns that may indicate objects using an elevated number of script lines, which can hint at performance issues. What it reveals: the optimal work-process count for the current workload, patterns in server routine execution that point to performance hotspots, and resource waste from over-provisioning.
Performance toward the database
Analysis of database performance and its interaction with the AE to identify bottlenecks and optimisation potential. The Automation Engine continuously interacts with the repository through many different types of queries, and some of them — for example SLUC, a SELECT FOR UPDATE statement used for transaction handling — should always run fast. When they don’t, that is a signal. What it reveals: which database calls are slow, whether slow calls correlate with specific time windows, and whether the slowness correlates with elevated server routines. This includes review of the AE REORG — frequency, runtime, table sizes, index structure — because REORG configuration directly shapes long-term database performance.
Object activations & load distribution
Analysis of process activations over different time horizons — monthly, weekly, daily, hourly — to identify peak-load periods and uneven load distribution. Most landscapes have sharp peaks at predictable times (the 06:00 morning batch, the end-of-month closing, the Monday reporting wave) and these peaks drive both stability and sizing decisions. What it reveals: workload shape, smoothing and consolidation opportunities, the real peak your sizing needs to accommodate, and load hotspots that deserve a closer look.
Object analysis (variables, calendars, jobs, agents)
Structural review of the Automic object layer. Are there variables with hundreds of thousands of entries — or calendars with thousands of entries — that slow down loading and operations? Are there jobs that abort repeatedly? Do the connected agents match the current Automation Engine version, and are they fully compatible? What it reveals: oversized objects that need cleanup, unstable jobs that indicate deeper process issues, and agent version mismatches that will become a problem at the next upgrade.
AWI (Webinterface) analysis
Review of the Automic Web Interface regardless of whether Tomcat or Jetty is used as the launcher. The consultant checks the configuration.properties file for useful parameters, inspects Tomcat and Jetty log files for anomalies, verifies that sufficient resources are available for AWI operation, and evaluates the overall AWI architecture for optimisation potential. What it reveals: Webinterface parameters that improve usability and stability, resource sizing issues that affect user experience, and architectural improvements for better scalability.
More than 30 structured diagnostic checks. The nine areas above are the main families of evaluation; inside each one, the consultant applies multiple specific checks depending on the database backend, Automic version and landscape size. A typical Health Check engagement produces between 20 and 60 concrete findings, each linked to the area it came from and each with its own recommendation and priority.
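
As a simplified illustration of the load-distribution idea behind the activation analysis, this Python sketch buckets activation timestamps by hour of day and reports the peak window. The input format and sample data are assumptions for the example; in a real engagement this comes from the repository’s historical runtime records.

```python
from collections import Counter
from datetime import datetime

# Sketch of the load-distribution analysis: bucket activation timestamps by
# hour of day and report the peak window. Sample data is purely illustrative.
activations = [
    "2024-03-04 06:01:12", "2024-03-04 06:02:40", "2024-03-04 06:15:03",
    "2024-03-04 09:30:00", "2024-03-04 23:55:10",
]

per_hour = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour for ts in activations
)
peak_hour, peak_count = per_hour.most_common(1)[0]
print(f"peak: {peak_hour:02d}:00 with {peak_count} activations")
# -> peak: 06:00 with 3 activations
```

The real evaluation repeats this over monthly, weekly, daily and hourly horizons, because the sharpest recurring peak is what sizing ultimately has to accommodate.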

4 · Deliverables

What you receive

At the end of the 5-day Automic System Health Check, you leave the engagement with a complete set of written and visual outputs that your team can work from on its own, without further Tricise involvement. Everything listed below is part of the fixed price — the Automic System Health Check has no hidden extras and no per-hour billing inside the agreed scope.

Detailed analysis report

A comprehensive written document containing all findings from the engagement, organised by evaluation area and with each finding described, quantified, and put into context. The top section is a structured executive summary readable by operations management; the body carries the technical detail operations and platform engineers need to act. The document is walked through jointly with the Tricise expert at the end of the engagement so everyone on your team understands what was found, why it matters, and what the next steps look like.

Practical recommendations & action plan

Every finding comes with a concrete recommendation: what to change, how much effort it represents, and what the expected impact on stability, performance or resource utilisation is. Recommendations are grouped into immediate quick wins, medium-term improvements, and longer-term architectural changes, so your team has a clear top-down list rather than a flat bag of suggestions.
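
The grouping logic can be pictured with a small Python sketch that ranks findings so that high-impact, low-effort items (the quick wins) surface first. The 1–5 rating scale and the sample findings are purely illustrative assumptions, not actual report content.

```python
# Illustrative sketch: rank findings so high-impact, low-effort items come
# first. The 1-5 scales and the sample findings are made up for the example.
findings = [
    {"id": "F-01", "title": "Obsolete FT_VERSION entries", "impact": 2, "effort": 1},
    {"id": "F-02", "title": "Work-process count oversized", "impact": 4, "effort": 2},
    {"id": "F-03", "title": "REORG interval too long", "impact": 5, "effort": 3},
]

# Higher impact ranks earlier; effort breaks ties in favour of cheaper items.
ranked = sorted(findings, key=lambda f: (-f["impact"], f["effort"]))
for f in ranked:
    print(f'{f["id"]}: impact {f["impact"]}, effort {f["effort"]} - {f["title"]}')
```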

Critical-adjustment highlights

A dedicated section highlighting adjustments that are critical for avoiding incidents, failed upgrades or stability regressions — the things that need attention first. This section exists because not every finding is equal: some are nice-to-have, and some are “fix this before the next maintenance window”.

Graphical evaluations

Visualisations of system load, object activations and database access patterns that make the findings tangible — the shape of the weekly peak, the distribution of database calls by duration, the top tables by row count, the top load drivers and most frequently triggered objects. Graphical output turns numbers into arguments your team can use internally when advocating for the prioritised changes.

Joint walkthrough session

The final walkthrough with the Tricise consultant is itself a deliverable. It is where your team sees the findings first-hand, clarifies context, asks questions and turns the written report into shared understanding. The session happens within the 5-day engagement, not as a separate follow-up, so nothing gets lost between analysis and handover.

Roadmap for future-proofing

A concise roadmap that pulls the recommendations together into a sequence: what to do first, what to do next, and what to plan for in the context of upcoming upgrades, scaling decisions or architectural changes. This is the deliverable that makes the Health Check useful months and years after the engagement itself.


5 · Technical scope

Technical scope & what we need from you

The Automic System Health Check is deliberately light on customer prerequisites. We do not ask your team to prepare data, run scripts, or assemble reports in advance. What we need is access to the system, availability during the walkthrough, and the contextual knowledge that only your operations team has. The total customer-side effort across the 5-day engagement is typically around half a day to one day — the rest is Tricise doing the work.

What is in scope

  • One Automic system — a single Automic Automation Engine system, which may include multiple clients if they share the same repository
  • 5 days of consultant effort — preparation, data collection, analysis, report, walkthrough
  • All nine evaluation areas — architecture, database, system settings, configuration, AE performance, database performance, activations, objects and AWI
  • More than 30 structured diagnostic checks — applied from the Tricise catalogue
  • Report, recommendations, graphical evaluations and walkthrough — the complete deliverable set

What your team needs to provide

  • Full access to the Automic system — an Automic user account with sufficient rights to inspect configuration, objects and system variables
  • Read access to the Automic repository — for running the Tricise analysis scripts against PostgreSQL, Oracle or SQL Server, depending on your backend
  • Access to relevant log files — Automation Engine logs, AWI logs (Tomcat or Jetty), and any recent incident logs that might be useful context
  • Participants for the walkthrough — typically one or two Automic administrators and, where applicable, a database administrator
  • A short scoping call before the engagement starts, to confirm access, contacts and timeline

Delivery mode

The Automic System Health Check is delivered remotely by default, which keeps costs predictable, enables fast scheduling, and works well for most customers. On-site delivery is available on request; travel expenses are then charged in addition to the fixed price, at actual cost. The analysis itself is identical in either delivery mode — remote delivery uses secure access and video conferencing for the walkthrough, and customers who have worked with Tricise remotely consistently report that the format works well for this type of engagement.

Out of scope

  • Implementation of the recommendations — the Health Check delivers the analysis and the plan; the implementation is typically handled by your own team or as a separate Tricise consulting engagement
  • Upgrade execution — an upgrade project is a separate engagement, though the Health Check is often a valuable input to it
  • Analysis of systems other than the one contracted in this engagement
  • Ongoing monitoring or managed services — see Tricise Application Managed Services (AMS) for that

6 · FAQ

Frequently asked questions


What is the Automic System Health Check and what does it do?

The Automic System Health Check is a fixed-price consulting package by Tricise that performs a structured, 5-day review of one Automic Automation Engine system. Broadcom-certified Tricise experts audit nine areas of the platform — system architecture, database, system settings, Automic configuration, AE performance, database performance, object activations, structural object layer, and the Webinterface (AWI) — applying more than 30 structured diagnostic checks from the Tricise evaluation catalogue. The engagement ends with a detailed report, prioritised recommendations, graphical evaluations, and a joint walkthrough session with your team, so you leave with both the document and the shared understanding needed to act on it.

How much does the Automic System Health Check cost?

The Automic System Health Check is offered at a fixed price of 8.000 € (excl. VAT) per Automic system. The fixed price covers the complete 5-day engagement: preparation, data collection, analysis across all nine evaluation areas, the written report, the prioritised recommendations, the graphical evaluations, and the joint walkthrough session. If the engagement is delivered on-site at your request, travel expenses are charged in addition at actual cost. There are no hidden extras and no per-hour billing inside the agreed scope.

How long does the Automic System Health Check take?

The Automic System Health Check is a 5-day engagement, usually delivered as five consecutive working days. The five days are a budget of Tricise expert time that covers preparation, data collection, analysis, report writing, and the final walkthrough — not a rigid day-by-day agenda. Depending on the landscape and the access situation, the distribution can vary slightly, but the total effort and the fixed price are constant. The effort on the customer side is very light by comparison — typically around half a day to one day in total, including the scoping call and the final walkthrough session.

Which Automic versions and database backends are supported?

The Automic System Health Check supports current Automic Automation Engine versions running on any of the three common repository databases: PostgreSQL, Oracle and Microsoft SQL Server. Tricise maintains analysis scripts and evaluation checks adapted to each backend and to the Automation Engine version in use. If your landscape uses a very old Automic version — something well below the current support window — let us know during scoping; we will tell you directly whether the standard engagement still fits or whether adjustments are needed.

What do I need to prepare before the Health Check starts?

The preparation on your side for an Automic System Health Check is intentionally minimal. You need to provide an Automic user account with enough rights to inspect configuration, objects and system variables; read access to the Automic repository database for the Tricise analysis scripts; and access to the relevant log files (Automation Engine, AWI, incident logs). You also nominate one or two Automic administrators — plus a database administrator where applicable — who join the short scoping call at the start and the walkthrough session at the end. No data pre-processing, no internal spreadsheets, no advance reports — the engagement is designed so that your team’s total involvement stays under one day.

Is the data collection intrusive? Does it affect production?

No. The data collection during the Automic System Health Check is entirely read-only — the Tricise analysis scripts query the Automic repository and the log files but never modify them. There is no code deployed into the Automation Engine, no agents installed, no changes to workflows or objects, no risk to running jobs. The scripts are reviewed jointly with your team before execution, so you see exactly what they do before they touch your environment. The load impact on the database is comparable to a reporting query.

How does this differ from the Automic Process Analysis & Optimisation package?

The two packages complement each other. The Automic System Health Check is an architecture and configuration review — it asks “is the platform healthy?” and looks at how the Automation Engine is set up, sized and tuned (database parameters, work processes, server routines, REORG, AWI configuration, object layer hygiene, agent compatibility). The Automic Process Analysis & Optimisation package is a workload review — it asks “are we running it efficiently?” and looks at what the system is actually executing, in what volume, at what frequency, and with what efficiency. Customers often book both, in either order, depending on which question is more pressing. A Health Check is typically the right starting point if stability, upgrades or sizing are the driving concern; Process Analysis fits better when the driver is workload efficiency or execution-based licensing.

Will you tell me exactly what to change, or just what’s wrong?

Every finding in the Automic System Health Check report comes with a concrete recommendation — what to change, how much effort it represents, and what the expected impact is. For configuration and parameter findings, the recommendation is typically at the level of “change this setting to that value because of X”. For architectural and sizing findings, the recommendation is at the pattern level, because the actual decision depends on business context only your team has. Either way, the report gives you actionable next steps, not just observations.

Can you also implement the recommendations?

Implementation is deliberately not part of the fixed-price Automic System Health Check — the package is scoped as an analysis and recommendation engagement. That said, Tricise can of course implement the recommendations as a follow-up project, scoped and priced separately based on what the Health Check produced. Many customers take the report first, work through the quick wins internally, and then engage Tricise for the medium-term and architectural items. Others hand everything back to us immediately. Both models work.

Do we keep the SQL scripts after the engagement?

The SQL analysis scripts and evaluation tooling are Tricise’s consulting method and remain with Tricise — they are the instrument we use to perform the Health Check, not a product deliverable. What you receive and keep is the complete analysis report, all findings, recommendations, graphical evaluations, and the action roadmap. If you want an ongoing, self-service monitoring capability rather than a point-in-time expert review, that is a different conversation — typically either the Automation CLI Action Pack, the Application Managed Services (AMS), or a periodically repeated Health Check engagement.

How often should we run a System Health Check?

There is no single right answer, but a useful rule of thumb is every 12 to 24 months, or whenever a significant event is coming up — a major Automic upgrade, a migration, a platform consolidation, a new workload onboarding, or a sharp growth in load. Between engagements, the recommendations from the previous Health Check should have been worked through, which means the next run starts from a cleaner baseline and produces more targeted findings. Customers who run a Health Check every 18 months and work through the top findings between runs typically see measurable long-term improvements in stability and performance.

Is the Health Check useful before an Automic upgrade?

Yes — this is one of the most common reasons customers book the Automic System Health Check. The evaluation catalogue includes obsolete parameters, legacy system variables, incompatible components and agent version mismatches, all of which are exactly the things that cause friction during an upgrade. A Health Check run 1–3 months before a planned upgrade gives you a clean list of what to fix up front so the actual upgrade window is much less exciting. Many Tricise customers pair the Health Check with an upcoming upgrade project for exactly this reason.

Ready to stabilise & future-proof your Automic system?
Book directly in the shop or talk to our experts about scope & scheduling.