I would love to have a script that I could use to parse log files "live".
I'm not even sure where to start: how do I read a log file, say every second, and have it return the new lines of text that have been added to it?
As far as parsing the text goes, I can handle that much; I just don't know how to do the first part.
With version 1.6.20, the LOG binding was introduced. This binding can be set up to trigger on any log message matching a given "channel text" pattern:
(47) LOG (stackable)
     bind log <flags> <mask> <proc>
     proc-name <level> <channel> <message>

     Description: triggered whenever a message is sent to a log. The mask
       is matched against "channel text". The level argument to the proc
       will contain the level(s) the message is sent to, or '*' if the
       message is sent to all log levels at once. If the message wasn't
       sent to a specific channel, channel will be set to '*'.
     Module: core
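For example, a minimal script using this bind might look like the following (the mask, channel name, and proc name are placeholders I made up; this only runs inside eggdrop, since bind and puthelp are eggdrop commands):

```tcl
# Hypothetical example: relay any log line whose "channel text" matches the
# glob mask "* *error*" to #somechannel (mask and channel are placeholders).
bind log - "* *error*" log:relay

proc log:relay {level channel message} {
    # level holds the log level letter(s), or "*" for all levels;
    # channel is "*" if the message wasn't sent to a specific channel
    puthelp "PRIVMSG #somechannel :log \[$channel/$level\] $message"
}
```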
If you'd rather observe a given file on the filesystem (not necessarily one generated by your eggdrop), you'd have to resort to asynchronous file IO using the fileevent mechanism in Tcl.
proc openFile {file} {
    set fd [open $file RDONLY]
    fconfigure $fd -blocking 0
    fileevent $fd readable [list readFile $fd]
}

proc readFile {fd} {
    if {[eof $fd]} {
        close $fd
    } else {
        gets $fd line
        # Do something with $line
        puthelp "PRIVMSG #somechannel :Read \"$line\" from log"
    }
}
openFile is used to open the file and set up the fileevent; readFile will then be called by the event loop whenever there is a new line to be read from the (opened) file.
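With those procs loaded, you would start watching a file with a single call (the path here is just an example):

```tcl
openFile "/home/user/logs/mylog.log"
```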
I guess what I'm getting at here is that I want to actively monitor a log file so that each time something new is written, the Tcl script catches it, but doesn't care about previous entries.
And the log file must remain intact, like a normal log file.
Hopefully I'm not confusing you or asking too much, as I really do appreciate your offerings here. You're very kind!
Well, as I said, this is asynchronous IO. Which means the code is event-driven and non-blocking by nature. You simply call the openFile proc once to open the file and set up the fileevent, and the event engine will call the readFile proc with appropriate arguments whenever there is something readable within the file.
That said, I now notice that I wrote the code for socket communications, where eof would indicate that the socket had been closed. For a "tail-like" behavior, the readFile proc would be rewritten as below:
proc readFile {file} {
    if {[gets $file line] >= 0} {
        # gets returned a complete line from the file, do some further processing
        puthelp "PRIVMSG #somechannel :Read \"$line\" from logfile"
    }
}
Or actually, scrap that..
The file will always be considered readable once it reaches end of file, resulting in the event being triggered over and over (effectively blocking eggdrop).
There are a few recipes for doing this using timers, though; the following code tries to read one line every second (with some modifications, it could be made to try and read as many lines as possible once every second).
proc readFile {fileId} {
    # (Try to) read one line
    set bytes [gets $fileId line]
    if {$bytes >= 0} {
        # We've got some data; append it to the buffer
        append ::tailBuffer($fileId) $line
        if {![eof $fileId]} {
            # We got a complete line with EOL; do something intelligent with it
            puts stdout "Read line: $::tailBuffer($fileId)"
            # Clear the buffer
            set ::tailBuffer($fileId) ""
        }
    }
    # Start another 1000 ms timer to call readFile again
    after 1000 [list readFile $fileId]
}
proc openFile {file} {
    # Open the file for read-only access
    set fileId [open $file RDONLY]
    # Move to the end of the file after opening (optional)
    seek $fileId 0 end
    # Configure the file for non-blocking access
    fconfigure $fileId -blocking 0
    # Prepare a buffer for lines missing EOL
    set ::tailBuffer($fileId) ""
    # Start a 1000 ms timer to call readFile
    after 1000 [list readFile $fileId]
}
You call "openFile" to open and start monitoring a file. Yes, it will continue to check the file roughly once every second for new content - assuming there's no error condition within "readFile" (in its current state, there isn't much that could cause an error to occur).
Obviously, you'll have to modify "readFile" to implement whatever parsing you've set your mind to (I simply had the script write the read line to stdout for demonstration purposes).
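As an aside, the "read as many lines as possible once every second" variation mentioned above could be sketched roughly like this (processLine is a hypothetical stand-in for your own parsing proc; for brevity, this sketch skips the ::tailBuffer partial-line handling):

```tcl
# Sketch: drain every complete line available once per second, instead of
# reading one line per tick. "processLine" is a placeholder for your parsing.
# Note: unlike the ::tailBuffer version above, a line whose trailing newline
# hasn't been written yet may be picked up in two pieces at end of file.
proc readAllLines {fileId} {
    while {[gets $fileId line] >= 0} {
        # A complete line was available; hand it off for parsing
        processLine $line
    }
    # gets returned -1: no (complete) new line right now; check again in 1 s
    after 1000 [list readAllLines $fileId]
}
```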
The actual log file I'm using is 100% real; the user account the bot is running under has full read access to it, and it works fine from an SSH console when I do something like: tail -f /path/to/filename.log
When I execute the above command on the partyline of the bot the output is the following: