
Fix auto catch #30

Open · wants to merge 23 commits into master
Conversation

alzamer2 (Collaborator)

Upgrade the code so it can catch links for USA-only anime.

alzamer2 added 23 commits April 8, 2017 19:49
+ fix the USA unblock option by replacing the old site, because it is no longer working
+ add the Node.js require modules, because some platforms need them to log in
it now works with USA-blocked anime by using the RSS feed
+ remove the USA unblocker improvement because it was not working
thanks to shinji257 for the fix
auto-catch now works in this order; if one source fails, it falls back to the next:
1. RSS
2. Site HTML
3. Site HTML + USA unblocker
remove extra lines
…all seasons

Can someone give feedback on this fix?
+ total size calculator
+ downloaded size update
+ download speed
+ multiple download connections to boost speed
thanks to Bandido06
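The multi-connection idea in the commit above comes down to splitting the file into byte ranges and requesting each range on its own connection with an HTTP `Range` header. A minimal sketch of the range-splitting step (the `split_ranges` helper is illustrative, not code from this PR):

```python
def split_ranges(total_size, connections):
    """Split total_size bytes into contiguous (start, end) ranges,
    one per download connection (ends inclusive, as in HTTP Range)."""
    chunk = total_size // connections
    ranges = []
    start = 0
    for i in range(connections):
        # the last range absorbs any remainder
        end = total_size - 1 if i == connections - 1 else start + chunk - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# each range would then be fetched with a header like
# {'Range': 'bytes=%d-%d' % (start, end)}
print(split_ranges(100, 4))  # [(0, 24), (25, 49), (50, 74), (75, 99)]
```

The pieces are written to their offsets in the output file as they arrive, which also avoids re-appending huge buffers in memory.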
fix appending of huge files
the speed calculation was in bytes while the unit was in bits; it is fixed now
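The speed-unit bug described above is just the missing factor of 8 between bytes and bits. A minimal sketch of the corrected conversion (the function name is illustrative):

```python
def bytes_per_sec_to_mbit(bytes_per_second):
    # 1 byte = 8 bits; divide by 1,000,000 for megabits
    return bytes_per_second * 8 / 1_000_000

print(bytes_per_sec_to_mbit(125_000))  # 1.0 (Mbit/s)
```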
This reverts commit 78bbc26.
fix the code so that it can catch USA-only anime

PS: fixed the code so that only the catch is here
disable the RSS feed for now, as the Crunchyroll RSS feed is not complete
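The fallback order the commits describe (RSS, then site HTML, then site HTML with the USA unblocker) can be sketched as a chain that returns the first non-empty result; the fetcher callables below are placeholders, not this project's actual functions:

```python
def autocatch_links(url, fetchers):
    """Try each link source in order; fall back to the next one
    when a source raises or returns no links."""
    for fetch in fetchers:
        try:
            links = fetch(url)
        except Exception:
            continue  # this source failed entirely, try the next
        if links:
            return links
    return []

# hypothetical sources, in the order the commits describe
def from_rss(url): raise RuntimeError("feed incomplete")
def from_html(url): return []  # page parse found nothing
def from_html_unblocked(url): return ["http://www.crunchyroll.com/ep1"]

print(autocatch_links("series-url", [from_rss, from_html, from_html_unblocked]))
```

Disabling the RSS feed, as the last commit does, then just means dropping the first entry from the fetcher list.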
@rcyclope (Contributor) commented Aug 9, 2017

The RSS feed only lists the latest season.

If you want to download the whole series, you need to use this:

```python
def autocatch():
    import re, subprocess
    import requests, pickle

    # load the saved session cookies (pickle needs binary mode)
    with open('cookies', 'rb') as f:
        cookies = requests.utils.cookiejar_from_dict(pickle.load(f))
    session = requests.session()
    session.cookies = cookies
    del session.cookies['c_visitor']
    headers = {'Referer': 'http://crunchyroll.com/', 'Host': 'www.crunchyroll.com',
               'User-Agent': 'Mozilla/5.0 Windows NT 6.1; rv:26.0 Gecko/20100101 Firefox/26.0'}

    print 'indicate the url : '
    url = raw_input()

    # fetch the series page and keep a local copy
    mykey = session.get(url, headers=headers).content
    with open("crunchy.html", "w") as cr:
        cr.write(mykey)

    # collect every episode link on the page, not just the latest season
    aList = []
    for i in re.findall('<a href="/(.+?)" title=', mykey):
        aList.append('http://www.crunchyroll.com/' + i)

    if aList:
        take = open("queue.txt", "w")
        take.write(u'#any line that has a hash before the link will be skipped\n')
        for i in aList:
            print >> take, i
        take.close()
        subprocess.call('notepad.exe ' + "queue.txt")
```
