Famous quotes from DITA Europe

To conclude the 2014 flashback series, let me share with you a few notes from the 10th anniversary of DITA Europe in Munich.

The main reason I love attending DITA Europe is the relaxed atmosphere, which encourages an intense exchange between attendees and even spontaneous debates during the sessions. If you have already attended, you know what I mean and you’ll enjoy remembering the following quotes. If not, try and guess… who said what?

DITA Europe 2014 collage

DITA Usage Infographic Late 2014 (IXIASOFT)

Post-conference: The DITA-OT Day
DITA OT Day 2014 collage

[Update] Answers:

  1. JoAnn Hackos
  2. Jang Graat
  3. Dawn Stevens
  4. Eliot Kimber

With Legos in Stuttgart

On the way to XML Prague, I have been trying to continue my series of articles about the recent conferences. This article was originally written in German, because it covers the tekom Jahrestagung 2014 in Stuttgart. I had already attended the conference in previous years, but this time I gave my first talk and a workshop in German, and even back to back.

My workshop – Das DITA-Implementierungsprojekt (the DITA implementation project) – and my talk – Verstehen Sie DITA-Architektur? (do you understand DITA architecture?) – did not take place until the third day. Nevertheless, they were well attended. I would have wished for better sound insulation between the workshop rooms and a seat at the table for every participant, so that everyone could join in the exercises… As it was, we had to skip quite a lot, but the group was still active and asked good questions.

DITA implementation slides

Right afterwards I got to talk about DITA architecture in the huge plenary room… which felt so odd that I could feel my train of thought slipping away. Fortunately, the block did not last: the very next week I gave another talk and everything went just fine. For the DITA architecture part I had actually proposed a workshop as well, but this time I was only allowed to turn it into a talk. Maybe the workshop will work out at the Jahrestagung 2015 🙂 That way I could teach my audience, with concrete examples and exercises, what I still owe them.

DITA architecture slides

Moreover, I was in Stuttgart for the first time as an exhibitor, together with my new employer PANTOPIX. We invited friends and fair visitors to talk with us about their data models while playing with Lego bricks. Besides a series of company logos, a few unique objects emerged from the collaboration of our booth visitors. Thanks for joining in!

PANTOPIX Lego bricks

Spice up Your DITA Workflows – Flashback tekomRS

Part two of the flashback series recalls my prezi “about… DITA, of course” as @georgebina said, at the tekom Europe Roadshow in Bucharest.

The RoadShow story

After George showed the efficient recipes they use at Syncro for DITA along the software documentation lifecycle, I just suggested a few more spices to make a writer’s life a bit easier.

Sometimes it feels like the only constant in a technical writer’s work is change. Whether in agile or waterfall, project teams tend to place documentation towards the end of the process, or leave it at least one iteration behind. So after the documentation is reviewed, approved, integrated in the kit and sent to translation, you notice the final seasoning: “minor” changes in the product right before the release. A modified label here, a moved button there… all exceptions to the “code freeze”.

Spice up your DITA workflows
But change is good, and you are already at a great advantage if you use DITA. Indeed, you can make your documentation flexible and agile by adding a few scripts to your DITA projects, so that it keeps up with the changes in the products you are documenting.

Let’s see some examples for frequent updating of:
– strings in the user interface
– reference code
– application screenshots
– in-line code documentation


In the case of GUI strings, you can use keys in DITA, so that you don’t have to worry about updating every topic when something changes. You just update the values in a keymap, or even use different keymaps in the same project for different versions of the product.

<step>
   <cmd>Under <option keyref="mnu_sound-sch"/> select
     <uicontrol keyref="btn_nosound"/>.</cmd>
</step>
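The keys resolve to values defined in a keymap. Here is a minimal sketch of such a keymap, reusing the key names from the snippet above (the map title and the keyword values are placeholders):

<map>
   <title>UI strings (English)</title>
   <keydef keys="mnu_sound-sch">
      <topicmeta>
         <keywords><keyword>Sound Scheme</keyword></keywords>
      </topicmeta>
   </keydef>
   <keydef keys="btn_nosound">
      <topicmeta>
         <keywords><keyword>No Sounds</keyword></keywords>
      </topicmeta>
   </keydef>
</map>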

The special spice would be generating the keymaps on the fly, with a script like “ini2dita”, “csv2dita”, “xls2dita”… Talk to your developers and see how you can integrate the docs with the localization strings.


Keeping sources such as code samples or third-party licenses in separate files allows you to integrate them into your DITA content with coderef, increasing the flexibility of your projects.

<stepxmp>
   <codeblock outputclass="language-ini">
      <coderef href="codesample.bat"/>   
   </codeblock>
</stepxmp>

If you are using screenshots in your documentation, it is also best practice to refer to them by keys. Thus you can have separate sets of images for various product flavours and languages.

<stepresult>
   <image keyref="scn_sound-settings"/>
</stepresult>
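The screenshot keys can then be defined in a separate keymap per product flavour or language. A minimal sketch, with a hypothetical image path:

<keydef keys="scn_sound-settings" href="images/en/sound-settings.png" format="png"/>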

Imagine you could even have the screenshots generated automatically. Wouldn’t that save a great deal of time? Tools like AutoHotkey and WinSpy might help.


Another advantage of DITA is that you can apply an XSLT transformation to the inline code documentation written by developers, for example Python docstrings in rST, and even do the round trip between rST and DITA formats. This method allows developers to keep writing in their favourite environment, and you can even supply edited versions back to them in the same form. More about this in April at DITA NA in Chicago.

With these few seasoning ideas for your DITA workflows, you can save a lot of time and frustration when updating documentation projects, while increasing their accuracy and consistency. Give it a try!

Implementing DITA – Workshop Flashback TCUK14

Finally catching up with my posts, starting a series of scrapbook-like articles about the events I attended in the past few months. I should be quick, as more events are coming soon…

Brighton Royal Pavilion

September 2014 in Brighton was the first time I attended TCUK. I met some old friends, made some new ones, ate good sushi, attended interesting sessions and had a great group to work with in the DITA workshop.

The workshop theme was “Implementing DITA – The work beyond the business case”, aiming to briefly present each implementation phase, to understand what the project team would have to go through and what the project plan would look like.

Have you been told that implementing DITA or migrating to a structured authoring environment would take at least two years and a six-digit amount from your budget? That might be true, but you should understand what lies beyond the business case in order to support your team effectively.

Let’s walk through the phases of the DITA implementation project together and see what the project plan contains, what new skills your team requires, which tasks you can prepare in-house, and how DITA tools and architecture can work best for you.

We’ll discuss and practice:

  • the implementation project plan
  • content inventory and analysis
  • information modelling
  • reuse strategy
  • DITA architecture
  • DITA templates
  • changes in the documentation workflow with new team roles

After attending this workshop, you will be ready to present the components of a DITA implementation package to your team. Only after getting their commitment and motivation can you kick off a successful implementation.

Looking forward to TCUK15, here is my “storified” workshop report. Many thanks to the restless and enthusiastic John Kearney (@JK1440) who live-tweeted the event.

Storify: Implementing DITA (Workshop TCUK14)

Click the photo to view the story of “Implementing DITA – The work beyond the business case” on Storify

DITA is here to stay

or better said… to grow with us

I seldom hear news about XML/DITA adoption in the DACH region (the German-speaking countries), so I was glad to attend the TIM Users Conference in Constance last week. TIM is the XML authoring and content management system developed by Fischer Computertechnik.

The two-day conference was indeed an intensive knowledge exchange between TIM users, the FCT team, and their partners. As I expected, most of the attendees and speakers are using TIM in manufacturing and machine building enterprises, as well as on-site support services. Adobe FrameMaker and Microsoft Word are still broadly used as editors, but it is encouraging to hear about well established German tools joining the DITA world.

Last year I read in the DITA community about concerns that the standard was not getting enough support in Germany, but I must say it does not look that bad to me. As long as DITA is on academic curricula and major conference programs, no one can say it is being discouraged. Both new and well-known local CMS providers are offering and promoting DITA modules. And when corporations as large as SAP adopt a standard, the rest of the world has to follow.

As Prof. Wolfgang Ziegler was saying at tekom 2013 in Wiesbaden during his talk about information portals, DITA and XML have been around for 20 years… we don’t even talk about reasons for doing XML anymore – we just do it! (“Macht man einfach!“)

Prof. Sissi Closs also talks regularly about DITA and single sourcing. In Constance she presented DITA information architecture as a relatively new and absolutely necessary discipline, functioning as a continuous, agile process of information management.

Dr. Walter Fischer declared himself convinced by the advantages DITA brings to technical communication, especially considering what the Internet of Things is triggering in the emerging Industry 4.0 age.

In workshops, presentations and lightning talks, over coffee, at the football World Cup public viewing 😉 or on a boat trip on Lake Constance, “TIM-players” from Austria, Switzerland and Germany were in agreement: we need more collaboration with industry partners when it comes to exchanging content and integrating tools. Partner content providers hide behind a copyright clause and will only send a PDF or a protected copy of an illustration, instead of sharing the sources with the integrators of their products and documentation.

Machines are talking to machines and to humans, yet humans are still reluctant to comply with standards and to exchange information. XML is all around us anyway, so why do we wait to be forced to switch at the last moment, when it is obvious that we need to work with a standard like DITA to collaborate and manage information?

Darwin is in the DITA name for a reason: it’s an XML standard that’s evolving with us.

Happy DIT’ing!

Link management in DITA

One of the power features of the DITA environment is the linking mechanism. Combined with keys and conditioning, it gives single-source projects unexpected dimensions.

You can steer your linking strategy from the main ditamap of the deliverable in a clean, non-intrusive manner. The topics remain “neutral”, reusable in any number of projects. The content can be reassembled to fit different use cases, audiences and media just by manipulating the topic hierarchy and the relationship table in the ditamap, without actually touching the topics. And another piece of good news: no more manually maintaining those miniTOCs…

The following examples are created in oXygen Editor v16, and generated using the PDF and the WebHelp scenarios. By integrating your customized plugins for the DITA Open Toolkit, you can further adjust the look and feel of the publications.

Based on the topic nesting in the ditamap, and optionally the collection-type attribute, your publishing scenario can generate links between parent and child topics, links between sibling topics, and links to the next, previous and prerequisite topics in a sequence.

The result also depends on how you set the parameter for related links in the publishing scenario or build script. To display the collection links, set the args.rellinks parameter to "all". If set to "nofamily", only the link to the parent topic is displayed, apart from the links generated from the relationship table.

You should also consider whether you want your plugin to show or hide the short descriptions, and whether to group the links under a generic Related links label (which makes sense to me) or to display them by topic type: Related concepts, Related tasks, Related information, etc.

Let’s see a few examples of publishing from a ditamap with topics A, B, C, nested under an overview topic. Remember: you add all these links without modifying the topic sources!

Example 1: Simple nesting

Map sample with simple nesting
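Since the map is shown as a screenshot, here is a minimal sketch of its structure with hypothetical file names (the line numbers mentioned in the examples below refer to the screenshot, not to this sketch):

<map>
   <title>Linking examples</title>
   <topicref href="source/ov_topic.xml" format="dita" type="topic" scope="local">
      <topicref href="source/topic_a.xml"/>
      <topicref href="source/topic_b.xml"/>
      <topicref href="source/topic_c.xml"/>
   </topicref>
</map>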

Output: In the PDF, the overview topic (parent) has links to the nested topics (children), and the children link to the parent.

PDF from simple nesting

Example 2: A family collection

Replace line 5 in the map with:
<topicref href="source/ov_topic.xml" format="dita" type="topic" scope="local" collection-type="family">

Output: The parent has links to the children; the children link to the parent and to each other.

PDF with family collection

Example 3: A sequence collection

Replace line 5 in the map with:
<topicref href="source/ov_topic.xml" format="dita" type="topic" scope="local" collection-type="sequence">

Output: Let’s look at the WebHelp output, this time. The parent has links to the children in an ordered list; the children have links to the parent, to the previous and to the next sibling.

WebHelp output of sequence parent
WebHelp sequence child

Example 4: A required task in the sequence collection

Add the importance attribute to a required task in a sequence (line 6).
Sample with required task in sequence
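In markup, the required task could be flagged like this (file name hypothetical; “line 6” refers to the screenshot):

<topicref href="source/task_a.xml" importance="required"/>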

Output: The second and third topics contain a link to the first task, as prerequisite.

WebHelp sequence with prerequisite

To manage linking between topics that are not hierarchically related, create a relationship table at the end of the main ditamap of your deliverable. Remember, the map hierarchy also creates certain links, so you should avoid duplicates.

There are several types of relationship tables (reltable), but I prefer to stay with the two-column reltable, which allows me to think of links as unidirectional or bidirectional arrows between the two cells of each row.

Apart from linking to DITA topics (concept, task, reference, etc.), you can use the reltable to add links from a topic to external sources, such as web pages or other PDFs.
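For instance, a row could pair a topic with an external web page, assuming a hypothetical URL:

<relrow>
   <relcell>
      <topicref href="source/topic_a.xml"/>
   </relcell>
   <relcell>
      <topicref href="https://www.example.com/release-notes.html" scope="external" format="html">
         <topicmeta>
            <navtitle>Release notes</navtitle>
         </topicmeta>
      </topicref>
   </relcell>
</relrow>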

Example 5: Relationship table

A ditamap with topics A, B, C, D, nested under an overview topic, followed by topics X, Y, Z, and a relationship table.

ditamap with reltable

The example reltable contains three rows (a markup sketch follows the list):

  • The first row relates topic A to topics X and Y.
  • The second row relates topics C and D with topics X and Z, while X and Z are also grouped in a family (line 30), so they would also link to each other.
  • The last row defines a sourceonly relation (line 36) from topic B to topics A and Z, which means A and Z will not link back to B.
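Here is a minimal sketch of such a reltable, with hypothetical file names for topics A to Z (the line numbers above refer to the screenshot, not to this sketch):

<reltable>
   <relrow>
      <relcell>
         <topicref href="source/topic_a.xml"/>
      </relcell>
      <relcell>
         <topicref href="source/topic_x.xml"/>
         <topicref href="source/topic_y.xml"/>
      </relcell>
   </relrow>
   <relrow>
      <relcell>
         <topicref href="source/topic_c.xml"/>
         <topicref href="source/topic_d.xml"/>
      </relcell>
      <relcell collection-type="family">
         <topicref href="source/topic_x.xml"/>
         <topicref href="source/topic_z.xml"/>
      </relcell>
   </relrow>
   <relrow>
      <relcell>
         <topicref href="source/topic_b.xml" linking="sourceonly"/>
      </relcell>
      <relcell>
         <topicref href="source/topic_a.xml"/>
         <topicref href="source/topic_z.xml"/>
      </relcell>
   </relrow>
</reltable>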

I marked the related links in the published topics in red, green, purple and yellow, so you can identify them in the stylized reltable on the right.

PDF topics with links from reltable
To take advantage of these linking mechanisms, the authors in your team have to agree on writing topics for reuse, organizing the cross-references in the reltable and using keys for inline references. I’ll be writing about linking via keys in a future post.
Happy DIT’ing!

How do you reuse paragraphs?

“Die Qual der Wahl” (the agony of choice)… it’s hard to choose, sometimes. There are many ways to apply reuse in DITA, or in other authoring environments. What would be the best way for a team to manage reused content, in the case of similar topics with series of list items, table rows or plain paragraphs?

The usual discourse in the lectures, books and webinars I have seen, although the terminology varies, compares methods like inclusion (referencing, insets) and conditioning.

Two reuse methods

To achieve the same results (publishing deliverables A, B, C), you can work with methods such as the following (sketched in markup after the list):

  • managing a warehouse topic, from where each author retrieves components by ID (conref or conkeyref, in DITA) and maintains individual topics for each project.
  • managing one common topic for all authors and applying conditions (attributes) on items specific to each project.
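A minimal sketch of both methods, with hypothetical file and element names: a warehouse topic holds the shared item, a project topic pulls it in by conref, and the conditional variant carries an audience attribute instead.

<!-- warehouse.dita (method 1): shared components, maintained once -->
<topic id="warehouse">
   <title>Reusable components</title>
   <body>
      <p id="note_power">Disconnect the power supply before servicing the device.</p>
   </body>
</topic>

<!-- topic1.dita (method 1): each project topic reuses the paragraph by reference -->
<p conref="warehouse.dita#warehouse/note_power"/>

<!-- common topic (method 2): one source, filtered per deliverable via a ditaval file -->
<p audience="deliverable_a deliverable_c">Disconnect the power supply before servicing the device.</p>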

The first method obviously adds an intermediate stage and requires each author to maintain parallel versions of the same topic. Apart from that, it depends on your policy whether you see it as an advantage that the authors can rearrange the items and even add extra content around them.

Although the second method brings the advantage of maintaining only one source, it adds more complexity at the taxonomy level. The combinations of attribute values and the number of ditaval files or entries in the subject scheme map can become overwhelming.

There are other possibilities, of course. For example, you could consider topic1.xml the master topic and work with topic2.xml and topic3.xml as variants of it, which means changes in topic1 would also propagate to the variant topics.

Thank you for your input.