Also, FYI, I'm getting closer to putting out another release. This will bring back the navigation/tooltip functionality (in more robust form) that has been disabled by default. You'll be able to right-click on something and "goto" where it's defined. You'll be able to quickly navigate around a script using the combobox at the top of the script view. When you hover the mouse over a symbol, it will show you the parameters it takes (in the case of a function call) or the value it has (in the case of a define, say). And finally, there will be autocomplete as you type (i.e. "intellisense"). This should make it a lot easier to write scripts.
The second big piece of feature creep I'm working on (which will hopefully pan out) is support for audio resources for the messages (i.e. the base36 audio and sync resources). I'm trying to make it as easy as possible. Basically, you'll be able to select a message in the message view and import a .wav file for it; or, to make it even easier, press a record button, speak into your computer's microphone, and have that saved as an audio resource for that message entry. The issues I'm trying to work out revolve around how the resource management happens. These audio resource volumes (resource.aud) can be large (~300MB), and that doesn't play well with how I do things now.
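For the curious, those base36 patch files encode the message tuple right in the filename. Here's a little Python sketch of the naming scheme as I understand it (the prefix characters and field widths are from memory, so double-check before relying on this):

```python
# Sketch of the base36 patch-file naming convention for audio36/sync36
# resources. An entry is identified by a (module, noun, verb, cond, seq)
# tuple; details here are from memory, so treat this as illustrative.

def to_base36(value, width):
    """Encode value in base36, zero-padded to the given width."""
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    out = ""
    while value:
        out = digits[value % 36] + out
        value //= 36
    return out.rjust(width, "0")

def patch_name(module, noun, verb, cond, seq, sync=False):
    # '@' marks an audio36 patch, '#' a sync36 patch (if I remember right).
    prefix = "#" if sync else "@"
    return (prefix + to_base36(module, 3) + to_base36(noun, 2)
            + to_base36(verb, 2) + "." + to_base36(cond, 2)
            + to_base36(seq, 1))

print(patch_name(110, 1, 2, 0, 1))  # -> "@0320102.001" (8.3 filename)
```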
The LSL6 interpreter included in the SCI1.1 template game does actually support "talkies". I've verified this by putting in .map and "audio36" patch files. With no extra coding (except for changing a global flag), the corresponding message resource is now "spoken" in-game. One caveat is that the talkers in the game are either all text or all spoken. With the template game code as-is, if an audio resource is missing there is no fallback to text. It shouldn't be too hard to write code to handle this differently.
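The logic I have in mind is something like this (sketched in Python just to show the idea; the real fix would be SCI script in the template game's talker code, and all the names below are made up):

```python
# A sketch of the missing fallback. All names here are stand-ins, not
# anything that exists in the template game as shipped.
MSG_TEXT = 1
MSG_SPEECH = 2

def audio_resource_exists(noun, verb, cond, seq):
    return False  # stub; in-game this would ask the interpreter

def play_audio(noun, verb, cond, seq):
    pass  # stub

def display_text(noun, verb, cond, seq):
    print("showing text for", (noun, verb, cond, seq))  # stub

def say(noun, verb, cond, seq, message_type):
    if message_type == MSG_SPEECH and audio_resource_exists(noun, verb, cond, seq):
        play_audio(noun, verb, cond, seq)
    else:
        # Audio missing (or text mode): show the text instead of staying silent.
        display_text(noun, verb, cond, seq)

say(1, 2, 0, 1, MSG_SPEECH)  # no audio36 resource -> falls back to text
```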
I'm not sure what to do about sync resources (this is lipsync data). To me, it seems really difficult and tedious to generate this data (it's basically a list of "tick count"/cel pairs, the cels being the various mouth shapes of the talker), and I'm not sure how to expose it in an editor. At the very least, I can let people enter this data manually. That's tedious, but I think trying to match up audio with a particular view cel is tedious anyway. I doubt many people would do this. Certainly it would be possible to try to analyze the waveform and map it to mouth shapes. If someone wants to write a tool that does this... lol
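Just to show what a crude version of that analysis might look like, here's a Python sketch that slices a .wav into 1/60-second windows (one tick each) and picks a mouth cel by loudness. Real lipsync would want phoneme detection; everything here (cel numbering, thresholds, function names) is invented for illustration:

```python
# Crude "waveform -> mouth cel" sketch: split a mono 16-bit .wav into
# 1/60-second windows (one tick each) and map RMS loudness to a cel.
# The cel numbering and thresholds are made up for illustration.
import wave, struct, math

def lipsync_guess(path, num_mouth_cels=4):
    with wave.open(path, "rb") as w:
        assert w.getsampwidth() == 2 and w.getnchannels() == 1
        rate = w.getframerate()
        n = w.getnframes()
        samples = struct.unpack("<%dh" % n, w.readframes(n))
    window = max(1, rate // 60)          # samples per tick (60 ticks/sec)
    pairs, last_cel = [], None
    for tick, start in enumerate(range(0, len(samples), window)):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        # Louder -> wider mouth; the *3 gain is an arbitrary fudge factor.
        cel = min(int(rms / 32768.0 * 3 * num_mouth_cels), num_mouth_cels - 1)
        if cel != last_cel:              # only emit changes, like the sync data
            pairs.append((tick, cel))
            last_cel = cel
    return pairs                          # list of (tick, cel) pairs
```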
And finally, I need to get documentation done. I'm hoping to produce this automatically from "doc comments" in the scripts. This will be just a basic API reference, I guess. Hopefully I can rely on the community to produce proper tutorials (as some of you have already done).
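For what it's worth, the extraction itself could be pretty simple. Here's a rough Python sketch that scrapes /* */ doc comments sitting above procedure/method headers and dumps a plain reference (the comment format and folder layout are hypothetical, not final):

```python
# Minimal doc-comment scraper sketch. Assumes (hypothetically) that a doc
# comment is a /* ... */ block immediately preceding a (procedure ...) or
# (method ...) header; the real format may end up different.
import re, glob

DOC_RE = re.compile(
    r"/\*(?P<doc>.*?)\*/\s*\(\s*(?:procedure|method)\s*\(\s*(?P<name>[\w-]+)",
    re.DOTALL)

def extract(path):
    text = open(path).read()
    for m in DOC_RE.finditer(text):
        doc = " ".join(line.strip(" *") for line in m.group("doc").splitlines())
        yield m.group("name"), doc.strip()

for path in glob.glob("src/*.sc"):       # assumed location of script sources
    for name, doc in extract(path):
        print("%s\n    %s\n" % (name, doc))
```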