Code: Select all
set VarA [open "/path/to/txt/file" r]
# read returns the whole file in one call, so no eof loop is needed;
# split turns it into a proper Tcl list (one element per line)
set TxtList [split [read -nonewline $VarA] \n]
close $VarA
Code: Select all
set fo [open "file.txt" r]
while {[gets $fo line] >= 0} {
    # work with $line here …
}
close $fo
You wouldn't keep it in a file. You would use the power of structured query language (SQL) to handle storing and recalling it, which offloads part of that work from the eggdrop itself.

Madalin wrote:I want to make a statistics script for words/lines/smilies written in a channel, but I want it to count only words from my language. I have found a list of over 70,000 words in my language, and I want to know the best way to compare, let's say,
"I want to compare this phrase with name.txt file"
with the .txt file that contains all the words in my language.
Thanks
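One cheap way to do that comparison in plain Tcl is to load the word list into an array once at startup, then test each word of the phrase with `info exists` — a hash lookup, no file scanning per message. A minimal sketch: the one-word-per-line file layout and the helper names are assumptions, and the demo writes a tiny stand-in dictionary with `file tempfile` (Tcl 8.6+) instead of the real 70,000-word file.

Code: Select all
```tcl
# Load a dictionary file (assumed: one word per line) into an array
# so each later lookup is a hash probe instead of a file scan.
proc loadDict {path} {
    set fh [open $path r]
    foreach w [split [read -nonewline $fh] \n] {
        set ::knownWord([string tolower $w]) 1
    }
    close $fh
}

# Return how many words of $phrase exist in the loaded dictionary.
proc countKnown {phrase} {
    set hits 0
    foreach w [split [string tolower $phrase]] {
        if {[info exists ::knownWord($w)]} { incr hits }
    }
    return $hits
}

# Demo with a three-word sample dictionary (stands in for the real file).
set fh [file tempfile dictPath]
puts $fh "ana\nare\nmere"
close $fh
loadDict $dictPath
puts [countKnown "ana are pere"]   ;# 2 ("pere" is not in the sample list)
```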
Using just eggdrop won't let you generate statistics anywhere but in the bot itself. With a SQL database handling it you can populate a website, an FTP login screen, and so on: you have unlimited potential for where this data can be put to use.

Madalin wrote:I used an array. It takes almost 160 RAM (on tcl8.4) and around 60 RAM (tcl8.5), but it does the job as it should, fast and reliable. I don't want to make this using SQL or anything else because I don't find that way useful. The main channel has a lot of traffic.
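For the SQL route, Tcl ships a binding for SQLite (`package require sqlite3`). A hedged sketch of what the storage side could look like: the table name and columns are made up for illustration, an in-memory database stands in for a real file, and the `ON CONFLICT` upsert needs SQLite 3.24 or newer.

Code: Select all
```tcl
package require sqlite3

# :memory: keeps the demo self-contained; point this at a file
# (e.g. stats.db) so the data survives restarts and a website can read it.
sqlite3 db :memory:

db eval {
    CREATE TABLE IF NOT EXISTS stats(
        nick    TEXT PRIMARY KEY,
        words   INTEGER DEFAULT 0,
        lines   INTEGER DEFAULT 0,
        smilies INTEGER DEFAULT 0
    )
}

# Add to one nick's counters; the upsert keeps a single row per nick.
proc bump {nick words lines smilies} {
    db eval {
        INSERT INTO stats(nick, words, lines, smilies)
            VALUES($nick, $words, $lines, $smilies)
        ON CONFLICT(nick) DO UPDATE SET
            words   = words + $words,
            lines   = lines + $lines,
            smilies = smilies + $smilies
    }
}

bump Madalin 7 1 0
bump Madalin 5 1 1
puts [db eval {SELECT words, lines, smilies FROM stats WHERE nick = 'Madalin'}]
;# 12 2 1
```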
Code: Select all
set count 0
set fh [open $file r]
while {![eof $fh]} {
    set line [read $fh 35]
    # eof is only flagged after a read past the end returns ""
    if {$line eq ""} { break }
    set ::nickindex([set n [string trim [string range $line 0 19]]]) $count
    set ::nickwords($n) [string trim [string range $line 20 24]]
    set ::nicklines($n) [string trim [string range $line 25 29]]
    set ::nicksmilies($n) [string trim [string range $line 30 34]]
    incr count
}
close $fh
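The loader above assumes 35-byte records: a 20-byte nick field followed by three 5-byte counters. For the `$index*35` seek arithmetic to work, the writer has to pad every field to exactly those widths, which `format` does cheaply. A small sketch of the round trip (`makeRecord` is a made-up helper name; the widths mirror the code above):

Code: Select all
```tcl
# Build one fixed-width record: 20-byte nick + three 5-byte numbers = 35 bytes.
proc makeRecord {nick words lines smilies} {
    format %-20s%5d%5d%5d $nick $words $lines $smilies
}

set rec [makeRecord Madalin 5775 1041 621]
puts [string length $rec]                       ;# 35
# Parsing back uses the same offsets as the loader:
puts [string trim [string range $rec 0 19]]     ;# Madalin
puts [string trim [string range $rec 20 24]]    ;# 5775
```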
Code: Select all
#write a nick that already exists to the file
#new nicks would of course go at the end
set index $::nickindex($nick)
# r+ opens for update in place; w would truncate the whole file
set fh [open $file r+]
# seek to this record, plus 20 bytes for the nick we don't need to overwrite
seek $fh [expr {$index*35 + 20}]
# -nonewline keeps the write at exactly 15 bytes, preserving the 35-byte layout
puts -nonewline $fh [format %5s $::nickwords($nick)][format %5s $::nicklines($nick)][format %5s $::nicksmilies($nick)]
close $fh
Also there are TOP commands for lines/words/smilies.

<+ SRI> Statistics for _MaDaLiN_ are as follows: 1041 written lines containing 5775 words (5.5 per sentence) / 22710 points (3.9 per word) / 621 Smiles (0.6 per sentence) and 1932 words that do not belong to the Romanian language or were misspelled.
The problem you have with eating RAM is obvious.

Madalin wrote:I never modify the file where the words are contained; whenever I restart I just load that file to set the words again. The userfile is different. As I said, everything works OK so far.
Code: Select all
# This will eat extra RAM equal to whatever the size of the file is:
# while loading, there are two full copies of the data in memory
# (the raw file contents plus the arrays being built from them).
# Done this way you must also rewrite the entire file every time
# you want to back up the arrays in memory. None of this is memory
# efficient. This example is lousy.
#
# initialize and load the user stats file
set fh [open $file r]
set data [read $fh]   ;# don't reuse $file here, it still holds the path
close $fh
foreach line [split $data \n] {
    set line [split $line]   ;# split on whitespace, not \n again
    set ::nickwords([set n [lindex $line 0]]) [lindex $line 1]
    set ::nicklines($n) [lindex $line 2]
    set ::nicksmilies($n) [lindex $line 3]
}
# This will eat only about 35 bytes of extra RAM:
# there is only a single record in memory at a time.
# Doing it this way I can minimize my rewrites when needing
# to save any part of the file. I can also save the entire file
# at any time if I want, equally as easily. This example r0x.
#
# initialize and load the user stats file
set count 0
set fh [open $file r]
while {1} {
    set line [read $fh 35]   ;# read with a byte count cannot take -nonewline
    if {[eof $fh]} { close $fh ; break }
    set ::nickindex([set n [string trim [string range $line 0 19]]]) $count
    set ::nickwords($n) [string trim [string range $line 20 24]]
    set ::nicklines($n) [string trim [string range $line 25 29]]
    set ::nicksmilies($n) [string trim [string range $line 30 34]]
    incr count
}
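Saving the entire file back from the arrays is then just a matter of walking the records in index order and writing each one fixed-width. A sketch assuming the same global arrays and 35-byte layout as above (`saveAll` is a made-up helper name; the demo writes to a temp file via `file tempfile`, Tcl 8.6+):

Code: Select all
```tcl
# Write every record back to $path, ordered by ::nickindex so that
# later partial updates via seek (index * 35) stay valid.
proc saveAll {path} {
    foreach n [array names ::nickindex] {
        set byIndex($::nickindex($n)) $n
    }
    set fh [open $path w]
    for {set i 0} {$i < [array size byIndex]} {incr i} {
        set n $byIndex($i)
        puts -nonewline $fh [format %-20s%5s%5s%5s \
            $n $::nickwords($n) $::nicklines($n) $::nicksmilies($n)]
    }
    close $fh
}

# Demo with two records.
array set ::nickindex   {Alice 0 Bob 1}
array set ::nickwords   {Alice 10 Bob 20}
array set ::nicklines   {Alice 2 Bob 4}
array set ::nicksmilies {Alice 1 Bob 3}
set fh [file tempfile statsPath]
close $fh
saveAll $statsPath
puts [file size $statsPath]   ;# 70 (2 records x 35 bytes)
```

Note that `puts -nonewline` matters here: a plain `puts` would append a newline to every record and shift all later seek offsets by one byte per record.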