
Scraping Toutiao with Python 3 (by Simulating Ajax Requests)



Note: this article follows blogger Cui Qingcai's (崔庆才) tutorial. It is not just the approach that is worth learning; the coding conventions are worth studying even more. As the saying goes, "The road ahead is long and winding; I will seek my way, high and low."

First, create a config.py file as follows:

MONGO_URL = 'localhost'    # MongoDB host
MONGO_DB = 'toutiao'       # database name
MONGO_TABLE = 'toutiao'    # collection name
GROUP_START = 1            # first page group to crawl
GROUP_END = 20             # last page group to crawl
KEYWORD = '街拍'           # search keyword ("street snap")
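Before writing the spider itself, it can help to confirm that MongoDB is actually reachable with these settings. A quick sanity check along these lines works (this helper script is not part of the original tutorial; it only uses pymongo's standard ping command):

# check_mongo.py - quick connectivity check before running the spider
import pymongo
from config import MONGO_URL, MONGO_DB

client = pymongo.MongoClient(MONGO_URL, serverSelectionTimeoutMS=3000)
try:
    client.admin.command('ping')  # raises if the server is unreachable
    print('MongoDB is reachable; will use database:', MONGO_DB)
except pymongo.errors.PyMongoError as e:
    print('Cannot connect to MongoDB at', MONGO_URL, '-', e)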

Then create a spider.py file with the following content:

import os
import re
import json
from json import JSONDecodeError
from hashlib import md5
from multiprocessing import Pool
from urllib.parse import urlencode

import requests
from requests.exceptions import RequestException
from bs4 import BeautifulSoup
import pymongo

from config import *  # import the settings defined in config.py

client = pymongo.MongoClient(MONGO_URL, connect=False)
db = client[MONGO_DB]

# Fetch the JSON returned by the search page's Ajax interface; call it step 1
def get_page_index(offset, keyword):
    data = {
        'offset': offset,
        'format': 'json',
        'keyword': keyword,
        'autoload': 'true',
        'count': '20',
        'cur_tab': '3',
        'from': 'gallery'
    }
    headers = {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}
    # the endpoint used in the original tutorial; Toutiao may have changed it since
    url = 'http://www.toutiao.com/search_content/?' + urlencode(data)
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print('Error requesting the index page')
        return None

# Parse the JSON from step 1 and yield the URL of each gallery page
def parse_page_index(html):
    try:
        data = json.loads(html)
        if data and 'data' in data.keys():  # make sure the 'data' field exists
            for item in data.get('data'):
                yield item.get('article_url')
    except JSONDecodeError:
        pass

# Fetch the HTML of a detail page found in step 1; call it step 2
def get_page_detail(url):
    headers = {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)'}
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print('Error requesting the detail page', url)
        return None

# Parse a page from step 2: extract the title and the gallery image URLs
def parse_page_detail(html, url):
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0'}
    soup = BeautifulSoup(html, 'lxml')
    title = soup.select('title')[0].get_text()
    print(title)
    # the image list is embedded in the page as a JavaScript variable
    images_pattern = re.compile('var gallery = (.*?);', re.S)
    result = re.search(images_pattern, html)
    if result:
        data = json.loads(result.group(1))
        if data and 'sub_images' in data.keys():
            sub_images = data.get('sub_images')
            images = ['http:' + item.get('url') for item in sub_images]
            print(images)
            for image in images:
                download_image(image, headers, title)
            return {
                'title': title,
                'images': images,
                'url': url,
            }

# Save a parsed result to MongoDB
def save_to_mongo(result):
    if db[MONGO_TABLE].insert_one(result):
        print('Saved to MongoDB', result)
        return True
    return False

# Download a single image
def download_image(url, headers, title):
    print('Downloading', url)
    try:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            save_image(response.content, title)
        return None
    except RequestException:
        print('Error requesting the image', url)
        return None

# Write the image to disk, named by the MD5 of its content to skip duplicates
def save_image(content, title):
    if not os.path.exists(title):  # create the folder if it does not exist
        os.mkdir(title)
    file_path = '{0}/{1}.{2}'.format(title, md5(content).hexdigest(), 'jpg')
    if not os.path.exists(file_path):
        with open(file_path, 'wb') as f:
            f.write(content)

# offset is the paging offset to crawl; KEYWORD is the search term from config.py
def main(offset):
    html = get_page_index(offset, KEYWORD)
    if not html:  # guard against a failed index request
        return
    for url in parse_page_index(html):
        html = get_page_detail(url)
        if html:
            result = parse_page_detail(html, url)
            if result:
                save_to_mongo(result)

if __name__ == '__main__':
    groups = [x * 20 for x in range(GROUP_START, GROUP_END + 1)]
    pool = Pool()
    pool.map(main, groups)
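At the bottom, Pool().map(main, groups) fans the offsets 20, 40, ..., 400 out across worker processes, so each process crawls one page of search results in parallel. The connect=False passed to MongoClient matters here: it defers the actual MongoDB connection until after the worker processes fork, avoiding pymongo's fork-safety warning.

Before launching the whole pool, it can help to try a single offset interactively. A minimal sketch (assuming spider.py sits in the current directory; Toutiao has changed this Ajax interface over time, so the endpoint may no longer return this exact structure):

# test_one_page.py - fetch one page of the index before running the full pool
from spider import get_page_index, parse_page_index

html = get_page_index(0, '街拍')  # offset 0, same keyword as in config.py
if html:
    for url in parse_page_index(html):
        print(url)  # each gallery's article_url (may be None for some items)
else:
    print('No JSON returned - the interface may have changed')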
