Python in Action: A Quick Start with BeautifulSoup to Scrape Column Titles and URLs
A Quick Start with BeautifulSoup
Installation
```shell
pip install beautifulsoup4
# If the command above fails, install from the Tsinghua mirror instead:
pip install beautifulsoup4 -i https://pypi.tuna.tsinghua.edu.cn/simple
```
Run these commands in PyCharm's built-in terminal.
Parsing tags
```python
from bs4 import BeautifulSoup
import requests
url = 'https://blog.csdn.net/weixin_42403632/category_11076268.html'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html = requests.get(url, headers=headers).text
s = BeautifulSoup(html, 'html.parser')
title = s.select('h2')
for i in title:
    print(i.text)
```
Line 1: import BeautifulSoup from the bs4 package.
Line 2: import requests.
Lines 3-5: download the HTML of the target URL.
Line 6: build the BeautifulSoup object; 'html.parser' selects Python's built-in HTML parser.
Line 7: select all <h2> tags.
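Because the example above depends on a live CSDN page, here is a minimal offline sketch of the same pattern; the inline HTML is invented for illustration:

```python
from bs4 import BeautifulSoup

# Invented sample HTML standing in for the downloaded page
html = '<div><h2>First post</h2><h2>Second post</h2></div>'
s = BeautifulSoup(html, 'html.parser')

# select('h2') returns every <h2> Tag in document order
titles = [tag.text for tag in s.select('h2')]
print(titles)  # ['First post', 'Second post']
```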
Parsing by attributes
BeautifulSoup supports selecting page elements by specific attributes.
Selecting by class value
```python
from bs4 import BeautifulSoup
import requests
url = 'https://blog.csdn.net/weixin_42403632/category_11076268.html'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html = requests.get(url, headers=headers).text
s = BeautifulSoup(html, 'html.parser')
title = s.select('.column_article_title')
for i in title:
    print(i.text)
```
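As above, this example needs a live page. The class-selector pattern itself can be sketched offline with invented HTML:

```python
from bs4 import BeautifulSoup

html = '''<ul>
  <li class="column_article_title">Post A</li>
  <li class="column_article_title">Post B</li>
  <li class="other">Skip me</li>
</ul>'''
s = BeautifulSoup(html, 'html.parser')

# A leading dot matches elements by class, exactly as in CSS
titles = [tag.text for tag in s.select('.column_article_title')]
print(titles)  # ['Post A', 'Post B']
```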
Selecting by ID
```python
from bs4 import BeautifulSoup

html = '''<div class="crop-img-before">
  <img src="" alt="" id="cropImg">
</div>
<div id='title'>
  測試成功
</div>
<div class="crop-zoom">
  <a href="javascript:;" class="bt-reduce">-</a><a href="javascript:;" class="bt-add">+</a>
</div>
<div class="crop-img-after">
  <div class="final-img"></div>
</div>'''
s = BeautifulSoup(html, 'html.parser')
title = s.select('#title')
for i in title:
    print(i.text)
```
Multi-level filtering
```python
from bs4 import BeautifulSoup

html = '''<div class="crop-img-before">
  <img src="" alt="" id="cropImg">
</div>
<div id='title'>
  456456465
  <h1>測試成功</h1>
</div>
<div class="crop-zoom">
  <a href="javascript:;" class="bt-reduce">-</a><a href="javascript:;" class="bt-add">+</a>
</div>
<div class="crop-img-after">
  <div class="final-img"></div>
</div>'''
s = BeautifulSoup(html, 'html.parser')
title = s.select('#title')
for i in title:
    print(i.text)
# Descendant selectors narrow the match: '#title h1' selects only
# <h1> tags inside the element whose id is "title"
title = s.select('#title h1')
for i in title:
    print(i.text)
```
Extracting URLs from <a> tags
```python
title = s.select('a')
for i in title:
    print(i['href'])
```
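The snippet above assumes `s` already holds a parsed document. A self-contained sketch (the inline HTML is invented) that uses `.get('href')`, so an `<a>` tag without an href yields None instead of raising KeyError:

```python
from bs4 import BeautifulSoup

html = ('<p><a href="https://example.com/a">A</a> '
        '<a href="https://example.com/b">B</a> '
        '<a name="anchor-only">C</a></p>')
s = BeautifulSoup(html, 'html.parser')

# Tag objects support dict-style attribute access; .get() is the safe form
links = [a.get('href') for a in s.select('a')]
print(links)  # ['https://example.com/a', 'https://example.com/b', None]
```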
Hands-on: scraping column titles and URLs from a blog
```python
from bs4 import BeautifulSoup
import requests
import re
url = 'https://blog.csdn.net/weixin_42403632/category_11298953.html'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html = requests.get(url, headers=headers).text
s = BeautifulSoup(html, 'html.parser')
title = s.select('.column_article_list li a')
for i in title:
    # Each link's text carries an "原創" (original) badge on one line
    # and the article title on the next; the regex captures the title line
    print(re.findall('原創.*?\n(.*?)\n', i.text)[0].lstrip())
    print(i['href'])
```
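The real page layout may change, and the request needs network access, so here is an offline sketch of the same selector-plus-regex pipeline; the inline HTML merely mimics the structure the code above expects:

```python
import re
from bs4 import BeautifulSoup

# Invented HTML mimicking the CSDN column list structure
html = '''<ul class="column_article_list">
<li><a href="https://blog.csdn.net/post1">
原創
First article title
</a></li>
</ul>'''
s = BeautifulSoup(html, 'html.parser')

results = []
for a in s.select('.column_article_list li a'):
    # The title sits on the line after the "原創" badge
    name = re.findall('原創.*?\n(.*?)\n', a.text)[0].lstrip()
    results.append((name, a['href']))
print(results)  # [('First article title', 'https://blog.csdn.net/post1')]
```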
到此這篇關(guān)于Python實戰(zhàn)快速上手BeautifulSoup庫爬取專欄標題和地址的文章就介紹到這了,更多相關(guān)Python BeautifulSoup庫內(nèi)容請搜索本站以前的文章或繼續(xù)瀏覽下面的相關(guān)文章希望大家以后多多支持本站!