Could someone tell me why I keep getting "Invalid idx" with this bit of script, please? Many thanx
proc postsite {nick idx uhost arg} {
    global botname chan templist lasttempsite isopen tempnumber
    if {$isopen == 0} {
        putserv "NOTICE $nick : Please Read the Topic !!, $botname is Currently Closed, Try Again Later."
        return 0
    } else {
        puthelp "PRIVMSG $chan :Possible new Site <$tempnumber> by $nick"
        putserv "NOTICE $nick : Please Wait While an OP Checks your Site Number <$tempnumber>"
        putdcc $idx "OPS Check this site Pleez"
    }
}
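For what it's worth, "Invalid idx" usually means putdcc was handed an idx that isn't an open DCC/partyline connection. An eggdrop pub bind calls its proc as {nick uhost hand chan text}, so there is no idx to pass in the first place. Here's a hedged sketch of how a pub-bound version might look (the !postsite trigger is a made-up placeholder, and putlog stands in for putdcc):

```tcl
# Sketch, assuming this proc is triggered by a channel command (bind pub).
# A pub bind supplies {nick uhost hand chan text} -- there is no DCC idx,
# which is why putdcc $idx fails with "Invalid idx".
bind pub - !postsite postsite

proc postsite {nick uhost hand chan text} {
    global botname isopen tempnumber
    if {$isopen == 0} {
        putserv "NOTICE $nick :Please read the topic; $botname is currently closed."
        return 0
    }
    puthelp "PRIVMSG $chan :Possible new site <$tempnumber> by $nick"
    putserv "NOTICE $nick :Please wait while an op checks your site number <$tempnumber>"
    # To reach ops without a DCC idx, log it (or send a note) instead of putdcc:
    putlog "OPS: check site <$tempnumber> from $nick"
}
```

putdcc only works with an idx you got from a dcc bind (a partyline command), so if the command comes from a channel you need a different way to notify the ops.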
I understand what you're saying and have looked at grep, but the file is too big to just look through by hand to see if the URL I want to add is already in it. Thank you for your help though, I appreciate it.
I think we're at different posts here; I was looking at the Grep Scripts.
What I need is this: I have a list of URLs in a .txt file. I want users to be able to add URLs to the .txt file via my Tcl script, but I want the Tcl to check whether the submitted URL is already in the list. If it is, say in the channel that it's a dupe; if it isn't in the .txt file, add it. Is that possible in Tcl?
I thought this would work, but it won't find the dupe even though the dupe is in templist.txt:
set file templist
set found 0
set fs [open $file r]
while {![eof $fs]} {
    gets $fs line
    if {$line == $nick} { set found 1 }
}
close $fs
if {$found} {
    puthelp "PRIVMSG $chan : was found!"
} else {
    puthelp "PRIVMSG $chan : was not found."
}
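For comparison, here is a minimal sketch of a dupe-check-then-append, assuming the submitted URL is in a variable `url` (in the snippet above the comparison is against $nick, and `set file templist` doesn't match the templist.txt mentioned, which may be the real culprit):

```tcl
# Check whether $url already appears in templist.txt; append it if not.
# Assumes one URL per line. Returns 1 if it was already there (a dupe).
proc addurl {url} {
    set file "templist.txt"
    set found 0
    # Only scan if the file already exists and can be opened.
    if {![catch {open $file r} fs]} {
        while {[gets $fs line] >= 0} {
            if {[string equal -nocase [string trim $line] $url]} {
                set found 1
                break
            }
        }
        close $fs
    }
    if {!$found} {
        set fs [open $file a]
        puts $fs $url
        close $fs
    }
    return $found
}
```

Note the `[gets $fs line] >= 0` loop condition: gets returns -1 at end of file, which avoids the classic `![eof $fs]` pattern processing one extra empty line at the end.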
Thanx again
I see. Well, that's less Tcl's fault than the algorithm you use to scan through the file. The Tcl file API (gets, read, puts, seek, tell, eof, etc.) is pretty much the same as any other language's, including C's. I doubt his file is 500 meg though, so the algorithm isn't very important.
Yes, then you could use a sequential read, evaluate, read-next algorithm, but I guarantee it won't be anywhere near the speed of a pure C implementation, simply due to the overhead of the interpreter running a while loop. Using external pure C tools (such as grep) is, in my mind, always preferable to the scripted alternative. Yes, in this case the performance increase is probably negligible, but that's no reason not to code it the most efficient way possible (well, technically, the most efficient way would be to build a simple grep as a Tcl module, but that's going a little too extreme just to avoid the overhead of exec).
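For what the exec approach might look like, here's a rough sketch (assuming grep is on the PATH and the file is templist.txt; the example URL is made up). Tcl's exec raises an error when the command exits nonzero, and grep exits 1 on no match, so catch doubles as the found/not-found test:

```tcl
# Exact whole-line, fixed-string match via external grep.
# -F: literal string, -x: match whole line, -q: no output, just exit status.
set url "http://example.com/"
set found [expr {![catch {exec grep -F -x -q $url templist.txt}]}]
```

The -F and -x flags matter here: without them a URL containing `.` or `?` would be treated as a regular expression, and a URL that is a prefix of another would match as a dupe.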
Ah I see, you mean Tcl's file performance is crap. When you said "file handling" I thought you meant the api. Yeah, Tcl is obviously much slower than C.
But anyway, unless you have thousands of URLs to check, my way is still faster than exec'ing grep. It also won't interfere with process limits on your shell, and it's always gonna be more portable.
On the performance issue, I think one should note that grep is also going to be slow on a 500 meg file. When you get to the point of having such a large file, you should use a real database like MySQL, not a flat text file and grep.