Thanks for releasing all of these scripts, incith. I've been using your google one for a while now, and after finding these forums earlier this week I've started using a few more of your tcls. Nice work.
I think there may be a problem with how I changed the binds. For some reason unknown to me, public messages have stopped working after I slightly modified how the binds are set up.
If xrl stops working in channels after you do a .die, please download 2.5a.
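For reference, a public-message trigger in an eggdrop script is normally wired up roughly like this (the !xrl trigger and proc name here are only illustrative, not xrl's actual binds):

Code:
# illustrative eggdrop pub bind; the command and proc names are assumptions
bind pub - !xrl pub:xrl

proc pub:xrl {nick uhost hand chan text} {
  # reply in the channel the trigger came from
  putserv "PRIVMSG $chan :shortening: $text"
}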
testebr wrote: if the char '?' is present in the url, it shows this error.
Sorry about that; I had removed them because I didn't think they were necessary anymore (them = ? and &). I have released version 2.5b on my website to address this problem.
Sigh.. slennox is gonna slaughter me one day with all the cancellations I do.
I have released 2.5c to hopefully address the above issues. % codes should always be converted on a !lengthen now, thanks to the above proc. Thanks, speechles.
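For reference, a urldecode along those lines usually looks something like this in Tcl (a sketch, not necessarily the exact proc xrl uses):

Code:
# sketch of a typical Tcl urldecode; not necessarily the exact proc in xrl
proc urldecode {input} {
  # protect literal backslashes so subst only expands our \u escapes
  set input [string map {\\ \\\\} $input]
  # turn every %XX into a \u00XX escape, then expand the escapes
  regsub -all -nocase {%([0-9a-f][0-9a-f])} $input {\\u00\1} input
  set bytes [subst -nocommands -novariables $input]
  # treat the result as utf-8 bytes so accented characters come out right
  return [encoding convertfrom utf-8 $bytes]
}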
Just for laughs, and perhaps for others to use (stealing is fine in this case), here is the corresponding urlencode.
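A minimal sketch of the idea (the regexp keep-list here stands in for the [string map { }]s mentioned below, and the exact symbols kept plus the $type values are assumptions rather than the original proc):

Code:
# minimal urlencode sketch; the keep-list and $type handling are assumptions
proc urlencode {input {type 0}} {
  # work on the utf-8 bytes so accented characters are encoded per byte
  set input [encoding convertto utf-8 $input]
  set output ""
  foreach char [split $input ""] {
    if {[regexp {[a-zA-Z0-9./:_-]} $char]} {
      # keep numerics, letters and a few required url symbols untouched
      append output $char
    } else {
      scan $char %c code
      if {$type == 1} {
        # wikipedia-style: encode with periods instead of percents
        append output [format ".%02X" $code]
      } else {
        append output [format "%%%02X" $code]
      }
    }
  }
  return $output
}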
The mundane [string map { }]s are necessary to keep numerics and other required symbols from being percent-encoded. The $type is only useful if you need more than one encoding style: sites such as wikipedia (type 1 in the code above) require almost everything encoded with periods ( . ), while nearly every other site, including google, uses percents ( % ).
I decided against using it in xrl for now because I do not want to store urls as http%##%##%##www blah blah (yes, I am too lazy to look up the codes for ://, still waking up). That is also why I removed % from being converted to %25 in 2.5d; I tested an ebay link stuffed with %'s and it still returned fine, and the urlDecode makes it even prettier now on a !lengthen.
There may still be some obscure URLs or characters that will break xrl in the future (I didn't test curly braces, for example), but I want to believe it's 99% good now.
incith wrote: This would be really useful in a lot of places.
Since you're merely shortening urls, you're probably closer to 99.999%. Your regexp defines what gets sent off for the url query (cleansing before querying), which is necessary because you're anticipating links within context (hard to do). I only provided the urlencode function because the urldecode was mentioned in this thread; I imagined they could share mention here and hopefully act as a reference in the future for someone suffering similar issues. Theft is encouraged, this is a sharing community.
Note: this helps get around the accented character problem when dealing with multiple languages as input.
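For example, with the two sketch procs above, an accented input round-trips cleanly (percent-style encoding shown):

Code:
# example round trip using the sketch procs above (percent-style encoding)
puts [urlencode "café latte"]        ;# caf%C3%A9%20latte
puts [urldecode "caf%C3%A9%20latte"] ;# café latte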
If demond doesn't mind, I'd love to see this incorporated into his rss feed script. Google news links are HUGE since they have referral links in them, so it can be quite spammy to have them in there. I may try to modify the two into one, but since I'm not a tcl coder it won't be too easy.