
Web crawler JS reverse engineering: solving a website's RSA-encrypted login without Selenium, and keeping the login state with a session

Date: 2018-12-18 22:06:02


This post documents how to get past the 中大网校 login and then crawl the site:

Target page: 中大网校会员中心-登陆入口-中大网校 (the member-center login entry page)

Tool used: a captcha-recognition service (超级鹰 / Chaojiying)

Analyzing the request:

Inspecting this request shows it carries no request data. For the login to work, the server must be able to tie the captcha to the same client that requested it, so a session object is used to maintain state across requests.

import time
import base64
import requests

get_img = "/apis//common/getImageCaptcha"

session = requests.session()
session.headers = {
    'Referer': "/login?url=http%3A%2F%%2F",
    "Content-Type": "application/json;charset=UTF-8",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"
}

# visit the login page first so the session picks up the initial cookies
session.get("/login?url=http%3A%2F%%2F")
time.sleep(0.5)

# the captcha endpoint returns a base64 data URI; keep only the payload after the comma
image_b = session.post(get_img).json()["data"].split(",")[1]
print(image_b)
with open("photo.png", "wb") as f:
    f.write(base64.b64decode(image_b))

Once fetched, the captcha image is saved locally.

Next, plug in Chaojiying to recognize the captcha text.

The Chaojiying client module:

#!/usr/bin/env python
# coding:utf-8
import requests
from hashlib import md5


class Chaojiying_Client(object):

    def __init__(self, username, password, soft_id):
        self.username = username
        password = password.encode('utf8')
        self.password = md5(password).hexdigest()
        self.soft_id = soft_id
        self.base_params = {
            'user': self.username,
            'pass2': self.password,
            'softid': self.soft_id,
        }
        self.headers = {
            'Connection': 'Keep-Alive',
            'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)',
        }

    def PostPic(self, im, codetype):
        """
        im: image bytes
        codetype: captcha type, see /price.html
        """
        params = {
            'codetype': codetype,
        }
        params.update(self.base_params)
        files = {'userfile': ('ccc.jpg', im)}
        r = requests.post('/Upload/Processing.php', data=params, files=files, headers=self.headers)
        return r.json()

    def PostPic_base64(self, base64_str, codetype):
        """
        base64_str: base64-encoded image
        codetype: captcha type, see /price.html
        """
        params = {
            'codetype': codetype,
            'file_base64': base64_str
        }
        params.update(self.base_params)
        r = requests.post('/Upload/Processing.php', data=params, headers=self.headers)
        return r.json()

    def ReportError(self, im_id):
        """
        im_id: image ID of a wrongly recognized captcha
        """
        params = {
            'id': im_id,
        }
        params.update(self.base_params)
        r = requests.post('/Upload/ReportError.php', data=params, headers=self.headers)
        return r.json()


if __name__ == '__main__':
    chaojiying = Chaojiying_Client('Chaojiying username', 'Chaojiying password', '938422')  # User Center >> Software ID: generate one to replace 96001
    im = open('a.jpg', 'rb').read()  # local image path; replace a.jpg, on Windows you may need //
    print(chaojiying.PostPic(im, 9004))  # 1902 is a captcha type, see the official price list; Python 3.4+ requires print()
    # print(chaojiying.PostPic_base64(base64_str, 1902))  # pass a base64 string instead

Call it to recognize the saved captcha:

im = open("photo.png","rb").read()chaojiying = Chaojiying_Client('cjy账号', 'cjy密码', '938422')#用户中心>>软件ID 生成一个替换 96001#本地图片文件路径 来替换 a.jpg 有时WIN系统须要//image_code = chaojiying.PostPic(im, '1004')["pic_str"]print(image_code)

Next, study the login XHR request:

The password has clearly been encrypted. Once the encryption is reproduced and the captcha code is attached, the account can log in successfully.

Searching the source:

Search for password to find the encryption entry point; encrypt is, as the name suggests, where the encryption happens, so step into it.

From here it is clear that this is a public key, and JSEncrypt is a commonly used front-end encryption library; the next step is to find out what type of encryption it applies.

Also, the argument passed to i.encryptFn is pwd + ress.data, and that data value, on closer inspection, comes from another request (see the screenshot below).

So before logging in, getTime has to be requested once to obtain the data value.
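A minimal sketch of that request, reusing the session created earlier; the path is the same one that appears in the assembled script further down:

# getTime returns the value that is appended to the plaintext password
# before RSA encryption (domain omitted here, as elsewhere in this post)
datatime = session.post("/apis//common/getTime").json()["data"]
print(datatime)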

Here you can see that e is the password, still in plaintext; step into encrypt to continue.

The screenshot above clearly shows RSA encryption, and after stepping through the debugger once more, d is already the ciphertext.

Next, perform the RSA encryption in Python:

import base64

from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_v1_5


def dispose(s):
    # public key taken from the site's front-end JS (base64-encoded DER)
    rsa_key = "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDA5Zq6ZdH/RMSvC8WKhp5gj6Ue4Lqjo0Q2PnyGbSkTlYku0HtVzbh3S9F9oHbxeO55E8tEEQ5wj/+52VMLavcuwkDypG66N6c1z0Fo2HgxV3e0tqt1wyNtmbwg7ruIYmFM+dErIpTiLRDvOy+0vgPcBVDfSUHwUSgUtIkyC47UNQIDAQAB"
    rsa_keys = RSA.importKey(base64.b64decode(rsa_key))
    rsa_new = PKCS1_v1_5.new(rsa_keys)
    cipher = rsa_new.encrypt(s.encode("utf-8"))
    return base64.b64encode(cipher).decode("utf-8")

The code above is how Python reproduces the site's RSA encryption.
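A quick usage sketch, assuming datatime already holds the value returned by the getTime request above; the password string here is only an illustrative placeholder:

password = "my_plaintext_password"  # placeholder, not a real credential
encrypted = dispose(password + datatime)  # same concatenation the site's JS performs
print(encrypted)  # base64-encoded RSA ciphertext; PKCS#1 v1.5 padding is randomized, so it differs on every run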

Maintaining the login state:

Here you can see the values that get loaded into the cookies; writing the same values into the session's cookies keeps the login state.

With the encryption flow understood, here is the assembled code:

import json
import requests
import time
import base64
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_v1_5
from encryption.中大网校登录ras.chaojiying_Python.chaojiying import Chaojiying_Client

get_img = "/apis//common/getImageCaptcha"

session = requests.session()
session.headers = {
    'Referer': "/login?url=http%3A%2F%%2F",
    "Content-Type": "application/json;charset=UTF-8",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"
}

# visit the login page first so the session picks up the initial cookies
session.get("/login?url=http%3A%2F%%2F")
time.sleep(0.5)

# fetch the captcha image (a base64 data URI) and save it
image_b = session.post(get_img).json()["data"].split(",")[1]
print(image_b)
with open("photo.png", "wb") as f:
    f.write(base64.b64decode(image_b))

# getTime returns the value appended to the password before encryption
datatime = session.post("/apis//common/getTime").json()["data"]
print(datatime)

# recognize the captcha with Chaojiying
im = open("photo.png", "rb").read()
chaojiying = Chaojiying_Client('cjy account', 'cjy password', '938422')  # User Center >> Software ID generates the soft_id
image_code = chaojiying.PostPic(im, '1004')["pic_str"]
print(image_code)

username = "you id"    # your account
password = "you**"     # your password


def dispose(s):
    rsa_key = "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDA5Zq6ZdH/RMSvC8WKhp5gj6Ue4Lqjo0Q2PnyGbSkTlYku0HtVzbh3S9F9oHbxeO55E8tEEQ5wj/+52VMLavcuwkDypG66N6c1z0Fo2HgxV3e0tqt1wyNtmbwg7ruIYmFM+dErIpTiLRDvOy+0vgPcBVDfSUHwUSgUtIkyC47UNQIDAQAB"
    rsa_keys = RSA.importKey(base64.b64decode(rsa_key))
    rsa_new = PKCS1_v1_5.new(rsa_keys)
    cipher = rsa_new.encrypt(s.encode("utf-8"))
    return base64.b64encode(cipher).decode("utf-8")


data = {
    "imageCaptchaCode": image_code,
    "password": dispose(password + datatime),
    "userName": username,
}
# the login endpoint expects a Request Payload, so the body must be json.dumps()-ed
resp = session.post("/apis//login/passwordLogin", data=json.dumps(data))
dic = resp.json()['data']
print(dic)

# copy the returned values into the session's cookies to keep the login state
session.cookies['UserCookieName'] = dic['userName']
session.cookies['OldUsername2'] = dic['userNameCookies']
session.cookies['OldUsername'] = dic['userNameCookies']
session.cookies['OldPassword'] = dic['passwordCookies']
session.cookies['UserCookieName_'] = dic['userName']
session.cookies['OldUsername2_'] = dic['userNameCookies']
session.cookies['OldUsername_'] = dic['userNameCookies']
session.cookies['OldPassword_'] = dic['passwordCookies']
session.cookies['autoLogin'] = "null"
session.cookies['userInfo'] = json.dumps(dic)
session.cookies['token'] = dic['token']
print(session)

Run result:

From here you can go on to request and parse any part of the site with this session.
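For example, a minimal sketch of a follow-up request; the path /member/center below is only a hypothetical placeholder, not an endpoint taken from this post:

# any subsequent request made with this session carries the login cookies
resp = session.get("/member/center")  # hypothetical path; replace with a real page on the site
print(resp.status_code)
print(resp.text[:200])  # peek at the authenticated page to confirm the login held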
