A Detailed Summary of Essential Python Web Scraping Techniques
Custom functions
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}

def baidu(company):
    url = 'https://www.baidu.com/s?rtt=4&tn=news&word=' + company
    print(url)
    html = requests.get(url, headers=headers).text
    s = BeautifulSoup(html, 'html.parser')
    title = s.select('.news-title_1YtI1 a')
    for i in title:
        print(i.text)

# Call the function for each company in turn
companies = ['騰訊', '阿里巴巴', '百度集團(tuán)']
for i in companies:
    baidu(i)
This prints the titles of several search results in one batch.
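As a side note, here is a minimal sketch of the same request built with the params argument of requests.get, which URL-encodes the keyword automatically instead of concatenating it into the URL by hand (the class name .news-title_1YtI1 is taken from the code above and may change if Baidu updates its page):

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}

def baidu(company):
    # requests builds and URL-encodes the query string from this dict
    params = {'rtt': 4, 'tn': 'news', 'word': company}
    html = requests.get('https://www.baidu.com/s', params=params, headers=headers).text
    s = BeautifulSoup(html, 'html.parser')
    for i in s.select('.news-title_1YtI1 a'):
        print(i.text)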
Saving the results to a text file
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}

def baidu(company):
    url = 'https://www.baidu.com/s?rtt=4&tn=news&word=' + company
    print(url)
    html = requests.get(url, headers=headers).text
    s = BeautifulSoup(html, 'html.parser')
    title = s.select('.news-title_1YtI1 a')
    fl = open('test.text', 'a', encoding='utf-8')
    for i in title:
        fl.write(i.text + '\n')
    fl.close()  # close the file so the titles are flushed to disk

# Call the function for each company in turn
companies = ['騰訊', '阿里巴巴', '百度集團(tuán)']
for i in companies:
    baidu(i)
The lines that do the writing:
fl = open('test.text', 'a', encoding='utf-8')
for i in title:
    fl.write(i.text + '\n')
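A variant sketch using a with block, which closes the file automatically even if an error occurs partway through (same file name and title list as above):

with open('test.text', 'a', encoding='utf-8') as fl:
    for i in title:
        fl.write(i.text + '\n')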
Exception handling
for i in companies:
    try:
        baidu(i)
        print('運(yùn)行成功')
    except:
        print('運(yùn)行失敗')
Because the try/except sits inside the loop, one failed request will not stop the program: it prints '運(yùn)行失敗' (run failed) and moves on to the next company.
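If you also want to know why a call failed, here is a small sketch that catches the exception object and prints it alongside the message (same companies list and baidu function as above):

for i in companies:
    try:
        baidu(i)
        print(i, '運(yùn)行成功')
    except Exception as e:
        # e carries the error details, e.g. a connection timeout
        print(i, '運(yùn)行失敗:', e)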
Sleep intervals
import time

for i in companies:
    try:
        baidu(i)
        print('運(yùn)行成功')
    except:
        print('運(yùn)行失敗')
    time.sleep(5)
time.sleep(5)
The number in parentheses is in seconds.
The program pauses (sleeps) at whatever point in the code the call is placed.
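To make the pauses look less mechanical, here is a sketch that sleeps for a random interval between requests; random.uniform comes from the standard library, and the 3 to 8 second range is just an example:

import random
import time

for i in companies:
    try:
        baidu(i)
        print('運(yùn)行成功')
    except:
        print('運(yùn)行失敗')
    # pause for a random 3 to 8 seconds before the next request
    time.sleep(random.uniform(3, 8))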
Scraping multiple pages
Search Baidu for 騰訊.
Switch to the second page of results.
Strip the redundant parameters from the URL in the address bar, leaving:
https://www.baidu.com/s?wd=騰訊&pn=10
Comparing the URLs page by page, we can deduce:
https://www.baidu.com/s?wd=騰訊&pn=0 is page 1
https://www.baidu.com/s?wd=騰訊&pn=10 is page 2
https://www.baidu.com/s?wd=騰訊&pn=20 is page 3
https://www.baidu.com/s?wd=騰訊&pn=30 is page 4
..........
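In other words, pn = (page number - 1) * 10. A minimal sketch of building the URL from a 1-based page number (the helper name page_url is made up for illustration):

def page_url(keyword, page):
    # page 1 -> pn=0, page 2 -> pn=10, page 3 -> pn=20, ...
    return 'https://www.baidu.com/s?wd=' + keyword + '&pn=' + str((page - 1) * 10)

print(page_url('騰訊', 2))  # https://www.baidu.com/s?wd=騰訊&pn=10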
The code:
import requests
from bs4 import BeautifulSoup
import time

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}

def baidu(c):
    # str(c) + '0' turns 0, 1, 2, ... into pn=00, 10, 20, ...
    url = 'https://www.baidu.com/s?wd=騰訊&pn=' + str(c) + '0'
    print(url)
    html = requests.get(url, headers=headers).text
    s = BeautifulSoup(html, 'html.parser')
    title = s.select('.t a')
    for i in title:
        print(i.text)

for i in range(10):
    baidu(i)
    time.sleep(2)
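Putting the techniques from this article together, here is a rough sketch (not from the original code) that crawls ten result pages, appends the titles to a file, pauses between pages, and keeps going if one page fails; the file name baidu_pages.txt is arbitrary:

import requests
from bs4 import BeautifulSoup
import time

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}

def baidu(c):
    url = 'https://www.baidu.com/s?wd=騰訊&pn=' + str(c * 10)
    html = requests.get(url, headers=headers).text
    s = BeautifulSoup(html, 'html.parser')
    # append this page's titles to the output file
    with open('baidu_pages.txt', 'a', encoding='utf-8') as fl:
        for i in s.select('.t a'):
            fl.write(i.text + '\n')

for i in range(10):
    try:
        baidu(i)
        print('page', i + 1, 'done')
    except Exception as e:
        print('page', i + 1, 'failed:', e)
    time.sleep(2)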
That concludes this detailed summary of essential Python web scraping techniques.