Mirror of https://github.com/NanmiCoder/MediaCrawler.git
Synced 2026-02-09 08:31:03 +08:00
Compare commits
15 Commits
codex/repl
...
feature/co
| Author | SHA1 | Date |
|---|---|---|
| | ae7955787c | |
| | a9dd08680f | |
| | cae707cb2a | |
| | 906c259cc7 | |
| | 3b6fae8a62 | |
| | a72504a33d | |
| | e177f799df | |
| | 1a5dcb6db7 | |
| | 2c9eec544d | |
| | d1f73e811c | |
| | 2d3e7555c6 | |
| | 3c5b9e8035 | |
| | e6f3182ed7 | |
| | 2cf143cc7c | |
| | eb625b0b48 | |
README.md (84 changed lines)
@@ -1,3 +1,5 @@
# 🔥 MediaCrawler - Self-Media Platform Crawler 🕷️
<div align="center" markdown="1">
<sup>Special thanks to:</sup>
<br>
@@ -12,8 +14,6 @@
</div>
<hr>

# 🔥 MediaCrawler - Self-Media Platform Crawler 🕷️

<div align="center">

<a href="https://trendshift.io/repositories/8291" target="_blank">
@@ -239,26 +239,59 @@ uv run main.py --platform xhs --lt qrcode --type search --save_data_option db

[🚀 MediaCrawlerPro is here 🚀! More features, better architecture!](https://github.com/MediaCrawlerPro)

### 💬 Community Groups
- **WeChat group**: [Click to join](https://nanmicoder.github.io/MediaCrawler/%E5%BE%AE%E4%BF%A1%E4%BA%A4%E6%B5%81%E7%BE%A4.html)

### 📚 Other Resources
- **FAQ**: [Full MediaCrawler documentation](https://nanmicoder.github.io/MediaCrawler/)
- **Beginner crawler tutorial**: [CrawlerTutorial (free)](https://github.com/NanmiCoder/CrawlerTutorial)
- **Open-source news crawler**: [NewsCrawlerCollection](https://github.com/NanmiCoder/NewsCrawlerCollection)
---

### 💰 Sponsors

<a href="https://www.swiftproxy.net/?ref=nanmi">
<img src="docs/static/images/img_5.png">
<br>
Swiftproxy - 90M+ high-quality clean residential IPs worldwide. Sign up for 500MB of free test traffic; dynamic traffic never expires!
> Exclusive discount code: **GHB5** for 10% off!
</a>

<br>
<br>

<a href="https://h.wandouip.com">
<img src="docs/static/images/img_8.jpg">
<br>
Wandou HTTP: a self-operated pool of tens of millions of IPs with ≥99.8% purity, refreshed at high frequency every day. Fast response, stable connections, suited to a wide range of business scenarios, with on-demand customization; sign up to extract 10,000 IPs for free.
</a>

---

<p align="center">
<a href="https://tikhub.io/?utm_source=github.com/NanmiCoder/MediaCrawler&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad">
<img style="border-radius:20px" width="500" alt="TikHub IO_Banner zh" src="docs/static/images/tikhub_banner_zh.png">
</a>
</p>

[TikHub](https://tikhub.io/?utm_source=github.com/NanmiCoder/MediaCrawler&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) offers more than **700 endpoints** for fetching and analyzing data from **14+ social media platforms** - videos, users, comments, shops, products, trends, and more - covering all your data access and analysis needs in one place.

Daily check-ins earn free credits. You can use my referral link: [https://user.tikhub.io/users/signup?referral_code=cfzyejV9](https://user.tikhub.io/users/signup?referral_code=cfzyejV9&utm_source=github.com/NanmiCoder/MediaCrawler&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) or the invite code: `cfzyejV9`; sign up and top up to receive **$2 in free credits**.

[TikHub](https://tikhub.io/?utm_source=github.com/NanmiCoder/MediaCrawler&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad) provides:

- 🚀 Rich social media data APIs (TikTok, Douyin, XHS, YouTube, Instagram, etc.)
- 💎 Free credits via daily check-in
- ⚡ High success rate and high-concurrency support
- 🌐 Website: [https://tikhub.io/](https://tikhub.io/?utm_source=github.com/NanmiCoder/MediaCrawler&utm_medium=marketing_social&utm_campaign=retargeting&utm_content=carousel_ad)
- 💻 GitHub: [https://github.com/TikHubIO/](https://github.com/TikHubIO/)

---
<p align="center">
<a href="https://app.nstbrowser.io/account/register?utm_source=official&utm_term=mediacrawler">
<img style="border-radius:20px" alt="NstBrowser Banner " src="docs/static/images/nstbrowser.jpg">
</a>
</p>

Nstbrowser anti-detect browser - the best solution for multi-account operation and automated management
<br>
Secure multi-account management with session isolation; fingerprint customization combined with an anti-detection browser environment for both realism and stability; covers store management, e-commerce monitoring, social media marketing, ad verification, Web3, campaign monitoring, and affiliate marketing; production-grade concurrency and customized enterprise services; a one-click cloud browser solution backed by a global pool of high-quality IPs, building long-term competitive strength for your business
<br>
[Click here to start using it for free](https://app.nstbrowser.io/account/register?utm_source=official&utm_term=mediacrawler)
<br>
Use NSTBROWSER to receive a 10% top-up bonus



### 🤝 Become a Sponsor
@@ -266,32 +299,9 @@ Swiftproxy - 90M+ high-quality clean residential IPs worldwide. Sign up for 500MB of free test

Become a sponsor and showcase your product here, getting massive exposure every day!

**Contact**:
- WeChat: `yzglan`
- WeChat: `relakkes`
- Email: `relakkes@gmail.com`


## 🤝 Community & Support

### 💬 Community Groups
- **WeChat group**: [Click to join](https://nanmicoder.github.io/MediaCrawler/%E5%BE%AE%E4%BF%A1%E4%BA%A4%E6%B5%81%E7%BE%A4.html)

### 📚 Docs & Tutorials
- **Online docs**: [Full MediaCrawler documentation](https://nanmicoder.github.io/MediaCrawler/)
- **Crawler tutorial**: [CrawlerTutorial (free)](https://github.com/NanmiCoder/CrawlerTutorial)


# Other FAQs are covered in the online docs
>
> The online docs cover usage, FAQs, how to join the project chat group, and more.
> [MediaCrawler online docs](https://nanmicoder.github.io/MediaCrawler/)
>

# Knowledge Services from the Author
> If you want to get started quickly, learn how to use this project and how its source is architected, improve your programming skills, or understand the source design of MediaCrawlerPro, take a look at my paid knowledge column.

[About the author's paid knowledge column](https://nanmicoder.github.io/MediaCrawler/%E7%9F%A5%E8%AF%86%E4%BB%98%E8%B4%B9%E4%BB%8B%E7%BB%8D.html)


---

## ⭐ Star History
@@ -282,7 +282,7 @@ If this project helps you, please give a ⭐ Star to support and let more people

Become a sponsor and showcase your product here, getting massive exposure daily!

**Contact Information**:
- WeChat: `yzglan`
- WeChat: `relakkes`
- Email: `relakkes@gmail.com`
@@ -282,7 +282,7 @@ uv run main.py --platform xhs --lt qrcode --type search --save_data_option db

Become a sponsor and showcase your product here, getting massive exposure daily!

**Contact Information**:
- WeChat: `yzglan`
- WeChat: `relakkes`
- Email: `relakkes@gmail.com`
@@ -38,7 +38,7 @@ SAVE_LOGIN_STATE = True
# Whether to enable CDP mode - crawl with the user's existing Chrome/Edge browser for stronger anti-detection
# When enabled, the user's Chrome/Edge browser is detected and launched automatically, then controlled over the CDP protocol
# This uses a real browser environment - the user's extensions, cookies, and settings - greatly reducing the risk of detection
ENABLE_CDP_MODE = False
ENABLE_CDP_MODE = True

# CDP debug port used to communicate with the browser
# If the port is taken, the next available port is tried automatically
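Editor's note: the config above only toggles CDP mode; for orientation, attaching to an already-running Chrome over CDP typically looks like the minimal Playwright sketch below. The port and launch flag are assumptions for illustration, not values taken from this repo.

```python
# Minimal sketch: attach Playwright to a user-launched Chrome over CDP.
# Assumes Chrome was started with: chrome --remote-debugging-port=9222
import asyncio

from playwright.async_api import async_playwright


async def main() -> None:
    async with async_playwright() as p:
        # Connect to the running browser instead of launching a fresh one, so the
        # session keeps the user's real extensions, cookies, and settings.
        browser = await p.chromium.connect_over_cdp("http://127.0.0.1:9222")
        context = browser.contexts[0]  # reuse the user's default browser context
        page = await context.new_page()
        await page.goto("https://www.bilibili.com")
        print(await page.title())


asyncio.run(main())
```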
@@ -13,16 +13,23 @@
# Daily cap on the number of videos/posts crawled
MAX_NOTES_PER_DAY = 1

# List of specified Bilibili video IDs
# List of specified Bilibili video URLs (full URLs and BV IDs are both supported)
# Examples:
# - Full URL: "https://www.bilibili.com/video/BV1dwuKzmE26/?spm_id_from=333.1387.homepage.video_card.click"
# - BV ID: "BV1d54y1g7db"
BILI_SPECIFIED_ID_LIST = [
    "BV1d54y1g7db",
    "https://www.bilibili.com/video/BV1dwuKzmE26/?spm_id_from=333.1387.homepage.video_card.click",
    "BV1Sz4y1U77N",
    "BV14Q4y1n7jz",
    # ........................
]

# List of specified Bilibili user IDs
# List of specified Bilibili creator URLs (full URLs and UIDs are both supported)
# Examples:
# - Full URL: "https://space.bilibili.com/434377496?spm_id_from=333.1007.0.0"
# - UID: "20813884"
BILI_CREATOR_ID_LIST = [
    "https://space.bilibili.com/434377496?spm_id_from=333.1007.0.0",
    "20813884",
    # ........................
]
@@ -11,15 +11,27 @@
# Douyin platform config
PUBLISH_TIME_TYPE = 0

# List of specified DY video IDs
# List of specified DY video URLs (multiple formats supported)
# Supported formats:
# 1. Full video URL: "https://www.douyin.com/video/7525538910311632128"
# 2. URL with modal_id: "https://www.douyin.com/user/xxx?modal_id=7525538910311632128"
# 3. Search page with modal_id: "https://www.douyin.com/root/search/python?modal_id=7525538910311632128"
# 4. Short link: "https://v.douyin.com/drIPtQ_WPWY/"
# 5. Bare video ID: "7280854932641664319"
DY_SPECIFIED_ID_LIST = [
    "7280854932641664319",
    "7202432992642387233",
    "https://www.douyin.com/video/7525538910311632128",
    "https://v.douyin.com/drIPtQ_WPWY/",
    "https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main&modal_id=7525538910311632128",
    "7202432992642387233",
    # ........................
]

# List of specified DY user IDs
# List of specified DY creator URLs (full URLs and sec_user_id are both supported)
# Supported formats:
# 1. Full creator profile URL: "https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main"
# 2. sec_user_id: "MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE"
DY_CREATOR_ID_LIST = [
    "MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE",
    "https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main",
    "MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE"
    # ........................
]
@@ -10,11 +10,22 @@

# Kuaishou platform config

# List of specified Kuaishou video IDs
KS_SPECIFIED_ID_LIST = ["3xf8enb8dbj6uig", "3x6zz972bchmvqe"]
# List of specified Kuaishou video URLs (full URLs and bare IDs are both supported)
# Supported formats:
# 1. Full video URL: "https://www.kuaishou.com/short-video/3x3zxz4mjrsc8ke?authorId=3x84qugg4ch9zhs&streamSource=search"
# 2. Bare video ID: "3xf8enb8dbj6uig"
KS_SPECIFIED_ID_LIST = [
    "https://www.kuaishou.com/short-video/3x3zxz4mjrsc8ke?authorId=3x84qugg4ch9zhs&streamSource=search&area=searchxxnull&searchKey=python",
    "3xf8enb8dbj6uig",
    # ........................
]

# List of specified Kuaishou user IDs
# List of specified Kuaishou creator URLs (full URLs and bare IDs are both supported)
# Supported formats:
# 1. Creator profile URL: "https://www.kuaishou.com/profile/3x84qugg4ch9zhs"
# 2. Bare user_id: "3x4sm73aye7jq7i"
KS_CREATOR_ID_LIST = [
    "https://www.kuaishou.com/profile/3x84qugg4ch9zhs",
    "3x4sm73aye7jq7i",
    # ........................
]
@@ -21,8 +21,12 @@ XHS_SPECIFIED_NOTE_URL_LIST = [
    # ........................
]

# List of specified user IDs
# List of specified creator URLs (full URLs and bare IDs are both supported)
# Supported formats:
# 1. Full creator profile URL (with xsec_token and xsec_source params): "https://www.xiaohongshu.com/user/profile/5eb8e1d400000000010075ae?xsec_token=AB1nWBKCo1vE2HEkfoJUOi5B6BE5n7wVrbdpHoWIj5xHw=&xsec_source=pc_feed"
# 2. Bare user_id: "63e36c9a000000002703502b"
XHS_CREATOR_ID_LIST = [
    "63e36c9a000000002703502b",
    "https://www.xiaohongshu.com/user/profile/5eb8e1d400000000010075ae?xsec_token=AB1nWBKCo1vE2HEkfoJUOi5B6BE5n7wVrbdpHoWIj5xHw=&xsec_source=pc_feed",
    "63e36c9a000000002703502b",
    # ........................
]
@@ -17,7 +17,7 @@



Scan my personal WeChat below with the note "pro版本" (if the image does not load, you can add my WeChat ID directly: yzglan)
Scan my personal WeChat below with the note "pro版本" (if the image does not load, you can add my WeChat ID directly: relakkes)


BIN docs/static/images/nstbrowser.jpg (vendored, new file)
Binary file not shown. After Width: | Height: | Size: 580 KiB

BIN docs/static/images/relakkes_weichat.jpg (vendored)
Binary file not shown. Before Width: | Height: | Size: 223 KiB  After Width: | Height: | Size: 230 KiB

BIN docs/static/images/tikhub_banner.png (vendored, new file)
Binary file not shown. After Width: | Height: | Size: 750 KiB

BIN docs/static/images/tikhub_banner_zh.png (vendored, new file)
Binary file not shown. After Width: | Height: | Size: 758 KiB
@@ -7,6 +7,6 @@
## How to Join the Group
> Add the note "github" and the group assistant will pull you into the group automatically.
>
> If the image does not load or has expired, you can add my WeChat ID directly: yzglan, note "github", and the group assistant will pull you into the group automatically
> If the image does not load or has expired, you can add my WeChat ID directly: relakkes, note "github", and the group assistant will pull you into the group automatically


@@ -41,6 +41,7 @@ from var import crawler_type_var, source_keyword_var
from .client import BilibiliClient
from .exception import DataFetchError
from .field import SearchOrderType
from .help import parse_video_info_from_url, parse_creator_info_from_url
from .login import BilibiliLogin


@@ -103,8 +104,14 @@ class BilibiliCrawler(AbstractCrawler):
                await self.get_specified_videos(config.BILI_SPECIFIED_ID_LIST)
            elif config.CRAWLER_TYPE == "creator":
                if config.CREATOR_MODE:
                    for creator_id in config.BILI_CREATOR_ID_LIST:
                        await self.get_creator_videos(int(creator_id))
                    for creator_url in config.BILI_CREATOR_ID_LIST:
                        try:
                            creator_info = parse_creator_info_from_url(creator_url)
                            utils.logger.info(f"[BilibiliCrawler.start] Parsed creator ID: {creator_info.creator_id} from {creator_url}")
                            await self.get_creator_videos(int(creator_info.creator_id))
                        except ValueError as e:
                            utils.logger.error(f"[BilibiliCrawler.start] Failed to parse creator URL: {e}")
                            continue
                else:
                    await self.get_all_creator_details(config.BILI_CREATOR_ID_LIST)
            else:
@@ -362,11 +369,23 @@ class BilibiliCrawler(AbstractCrawler):
                utils.logger.info(f"[BilibiliCrawler.get_creator_videos] Sleeping for {config.CRAWLER_MAX_SLEEP_SEC} seconds after page {pn}")
                pn += 1

    async def get_specified_videos(self, bvids_list: List[str]):
    async def get_specified_videos(self, video_url_list: List[str]):
        """
        get specified videos info
        get specified videos info from URLs or BV IDs
        :param video_url_list: List of video URLs or BV IDs
        :return:
        """
        utils.logger.info("[BilibiliCrawler.get_specified_videos] Parsing video URLs...")
        bvids_list = []
        for video_url in video_url_list:
            try:
                video_info = parse_video_info_from_url(video_url)
                bvids_list.append(video_info.video_id)
                utils.logger.info(f"[BilibiliCrawler.get_specified_videos] Parsed video ID: {video_info.video_id} from {video_url}")
            except ValueError as e:
                utils.logger.error(f"[BilibiliCrawler.get_specified_videos] Failed to parse video URL: {e}")
                continue

        semaphore = asyncio.Semaphore(config.MAX_CONCURRENCY_NUM)
        task_list = [self.get_video_info_task(aid=0, bvid=video_id, semaphore=semaphore) for video_id in bvids_list]
        video_details = await asyncio.gather(*task_list)
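Editor's note: the semaphore-plus-gather shape above recurs throughout these crawlers. A self-contained sketch of the pattern, with generic names that are not from this repo:

```python
# Generic sketch of the concurrency pattern used above: run all tasks with
# asyncio.gather while a semaphore caps how many execute at once.
import asyncio


async def fetch_one(item: str, semaphore: asyncio.Semaphore) -> str:
    async with semaphore:  # at most max_concurrency coroutines enter this block
        await asyncio.sleep(0.1)  # stand-in for the real network call
        return item.upper()


async def fetch_all(items: list[str], max_concurrency: int = 4) -> list[str]:
    semaphore = asyncio.Semaphore(max_concurrency)
    tasks = [fetch_one(item, semaphore) for item in items]
    return await asyncio.gather(*tasks)


print(asyncio.run(fetch_all(["a", "b", "c"])))
```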
@@ -568,18 +587,30 @@ class BilibiliCrawler(AbstractCrawler):
            extension_file_name = f"video.mp4"
            await bilibili_store.store_video(aid, content, extension_file_name)

    async def get_all_creator_details(self, creator_id_list: List[int]):
    async def get_all_creator_details(self, creator_url_list: List[str]):
        """
        creator_id_list: get details for creator from creator_id_list
        creator_url_list: get details for creator from creator URL list
        """
        utils.logger.info(f"[BilibiliCrawler.get_creator_details] Crawling the detalis of creator")
        utils.logger.info(f"[BilibiliCrawler.get_creator_details] creator ids:{creator_id_list}")
        utils.logger.info(f"[BilibiliCrawler.get_all_creator_details] Crawling the details of creators")
        utils.logger.info(f"[BilibiliCrawler.get_all_creator_details] Parsing creator URLs...")

        creator_id_list = []
        for creator_url in creator_url_list:
            try:
                creator_info = parse_creator_info_from_url(creator_url)
                creator_id_list.append(int(creator_info.creator_id))
                utils.logger.info(f"[BilibiliCrawler.get_all_creator_details] Parsed creator ID: {creator_info.creator_id} from {creator_url}")
            except ValueError as e:
                utils.logger.error(f"[BilibiliCrawler.get_all_creator_details] Failed to parse creator URL: {e}")
                continue

        utils.logger.info(f"[BilibiliCrawler.get_all_creator_details] creator ids:{creator_id_list}")

        semaphore = asyncio.Semaphore(config.MAX_CONCURRENCY_NUM)
        task_list: List[Task] = []
        try:
            for creator_id in creator_id_list:
                task = asyncio.create_task(self.get_creator_details(creator_id, semaphore), name=creator_id)
                task = asyncio.create_task(self.get_creator_details(creator_id, semaphore), name=str(creator_id))
                task_list.append(task)
        except Exception as e:
            utils.logger.warning(f"[BilibiliCrawler.get_all_creator_details] error in the task list. The creator will not be included. {e}")
@@ -9,15 +9,17 @@
# By using this code you agree to the principles above and all terms in the LICENSE.


# -*- coding: utf-8 -*-
# @Author : relakkes@gmail.com
# @Time : 2023/12/2 23:26
# @Desc : Bilibili request parameter signing
# Reverse-engineering reference: https://socialsisteryi.github.io/bilibili-API-collect/docs/misc/sign/wbi.html#wbi%E7%AD%BE%E5%90%8D%E7%AE%97%E6%B3%95
import re
import urllib.parse
from hashlib import md5
from typing import Dict

from model.m_bilibili import VideoUrlInfo, CreatorUrlInfo
from tools import utils
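Editor's note: BilibiliSign (used in the hunk below) implements the WBI signing scheme documented at the reference URL above. For orientation, a standalone sketch of that published algorithm follows; the permutation table and steps come from that reference, not from this diff, so treat them as illustrative and subject to drift:

```python
# Sketch of the documented WBI signing flow (see the reference link above).
import time
import urllib.parse
from hashlib import md5

# Permutation table as published in the bilibili-API-collect write-up (assumed).
MIXIN_KEY_ENC_TAB = [
    46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49,
    33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13, 37, 48, 7, 16, 24, 55, 40, 61,
    26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,
    20, 34, 44, 52,
]


def sign_wbi(params: dict, img_key: str, sub_key: str) -> dict:
    # 1. Derive the 32-char mixin key by permuting img_key + sub_key.
    raw = img_key + sub_key
    mixin_key = "".join(raw[i] for i in MIXIN_KEY_ENC_TAB)[:32]
    # 2. Add the wts timestamp and sort parameters by key.
    # (The published spec also strips the characters !'()* from values; omitted here.)
    params = dict(params, wts=int(time.time()))
    query = urllib.parse.urlencode(sorted(params.items()))
    # 3. w_rid is the md5 of the sorted query string concatenated with the mixin key.
    params["w_rid"] = md5((query + mixin_key).encode()).hexdigest()
    return params


print(sign_wbi({"aid": 170001}, "7cd084941338484aae1ad9425b84077c", "4932caff0ff746eab6f01bf08b70ac45"))
```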
@@ -66,16 +68,71 @@ class BilibiliSign:
        return req_data


def parse_video_info_from_url(url: str) -> VideoUrlInfo:
    """
    Parse the video ID from a Bilibili video URL.
    Args:
        url: Bilibili video link
            - https://www.bilibili.com/video/BV1dwuKzmE26/?spm_id_from=333.1387.homepage.video_card.click
            - https://www.bilibili.com/video/BV1d54y1g7db
            - BV1d54y1g7db (a bare BV ID)
    Returns:
        VideoUrlInfo: object containing the video ID
    """
    # If the input is already a BV ID, return it directly
    if url.startswith("BV"):
        return VideoUrlInfo(video_id=url)

    # Extract the BV ID with a regular expression
    # Matches the /video/BV... format
    bv_pattern = r'/video/(BV[a-zA-Z0-9]+)'
    match = re.search(bv_pattern, url)

    if match:
        video_id = match.group(1)
        return VideoUrlInfo(video_id=video_id)

    raise ValueError(f"Unable to parse a video ID from the URL: {url}")


def parse_creator_info_from_url(url: str) -> CreatorUrlInfo:
    """
    Parse the creator ID from a Bilibili creator space URL.
    Args:
        url: Bilibili creator space link
            - https://space.bilibili.com/434377496?spm_id_from=333.1007.0.0
            - https://space.bilibili.com/20813884
            - 434377496 (a bare UID)
    Returns:
        CreatorUrlInfo: object containing the creator ID
    """
    # If the input is already a numeric ID, return it directly
    if url.isdigit():
        return CreatorUrlInfo(creator_id=url)

    # Extract the UID with a regular expression
    # Matches the space.bilibili.com/<digits> format
    uid_pattern = r'space\.bilibili\.com/(\d+)'
    match = re.search(uid_pattern, url)

    if match:
        creator_id = match.group(1)
        return CreatorUrlInfo(creator_id=creator_id)

    raise ValueError(f"Unable to parse a creator ID from the URL: {url}")


if __name__ == '__main__':
    _img_key = "7cd084941338484aae1ad9425b84077c"
    _sub_key = "4932caff0ff746eab6f01bf08b70ac45"
    _search_url = "__refresh__=true&_extra=&ad_resource=5654&category_id=&context=&dynamic_offset=0&from_source=&from_spmid=333.337&gaia_vtoken=&highlight=1&keyword=python&order=click&page=1&page_size=20&platform=pc&qv_id=OQ8f2qtgYdBV1UoEnqXUNUl8LEDAdzsD&search_type=video&single_column=0&source_tag=3&web_location=1430654"
    _req_data = dict()
    for params in _search_url.split("&"):
        kvalues = params.split("=")
        key = kvalues[0]
        value = kvalues[1]
        _req_data[key] = value
    print("pre req_data", _req_data)
    _req_data = BilibiliSign(img_key=_img_key, sub_key=_sub_key).sign(req_data={"aid": 170001})
    print(_req_data)
    # Test video URL parsing
    video_url1 = "https://www.bilibili.com/video/BV1dwuKzmE26/?spm_id_from=333.1387.homepage.video_card.click"
    video_url2 = "BV1d54y1g7db"
    print("Video URL parsing tests:")
    print(f"URL1: {video_url1} -> {parse_video_info_from_url(video_url1)}")
    print(f"URL2: {video_url2} -> {parse_video_info_from_url(video_url2)}")

    # Test creator URL parsing
    creator_url1 = "https://space.bilibili.com/434377496?spm_id_from=333.1007.0.0"
    creator_url2 = "20813884"
    print("\nCreator URL parsing tests:")
    print(f"URL1: {creator_url1} -> {parse_creator_info_from_url(creator_url1)}")
    print(f"URL2: {creator_url2} -> {parse_creator_info_from_url(creator_url2)}")
@@ -324,3 +324,28 @@ class DouYinClient(AbstractApiClient):
        except httpx.HTTPError as exc:  # raised when httpx.request fails, e.g. a connection error, client/server error, or non-2xx status code
            utils.logger.error(f"[DouYinClient.get_aweme_media] {exc.__class__.__name__} for {exc.request.url} - {exc}")  # keep the original exception class name to aid debugging
            return None

    async def resolve_short_url(self, short_url: str) -> str:
        """
        Resolve a Douyin short link and return the real URL after redirection.
        Args:
            short_url: short link, e.g. https://v.douyin.com/iF12345ABC/
        Returns:
            The full URL after redirection
        """
        async with httpx.AsyncClient(proxy=self.proxy, follow_redirects=False) as client:
            try:
                utils.logger.info(f"[DouYinClient.resolve_short_url] Resolving short URL: {short_url}")
                response = await client.get(short_url, timeout=10)

                # Short links usually return a 302 redirect
                if response.status_code in [301, 302, 303, 307, 308]:
                    redirect_url = response.headers.get("Location", "")
                    utils.logger.info(f"[DouYinClient.resolve_short_url] Resolved to: {redirect_url}")
                    return redirect_url
                else:
                    utils.logger.warning(f"[DouYinClient.resolve_short_url] Unexpected status code: {response.status_code}")
                    return ""
            except Exception as e:
                utils.logger.error(f"[DouYinClient.resolve_short_url] Failed to resolve short URL: {e}")
                return ""
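Editor's note: an alternative to reading the Location header manually is to let httpx follow the redirect chain itself and inspect the final URL; the diff's follow_redirects=False approach avoids downloading the destination page. A hypothetical standalone equivalent, not code from this repo:

```python
# Hypothetical stand-alone variant: let httpx chase the 30x chain itself;
# response.url is then the final URL (at the cost of fetching the target page).
import asyncio

import httpx


async def resolve_short_url(short_url: str) -> str:
    async with httpx.AsyncClient(follow_redirects=True) as client:
        response = await client.get(short_url, timeout=10)
        return str(response.url)


print(asyncio.run(resolve_short_url("https://v.douyin.com/drIPtQ_WPWY/")))
```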
@@ -33,6 +33,7 @@ from var import crawler_type_var, source_keyword_var
from .client import DouYinClient
from .exception import DataFetchError
from .field import PublishTimeType
from .help import parse_video_info_from_url, parse_creator_info_from_url
from .login import DouYinLogin


@@ -154,15 +155,39 @@ class DouYinCrawler(AbstractCrawler):
        await self.batch_get_note_comments(aweme_list)

    async def get_specified_awemes(self):
        """Get the information and comments of the specified post"""
        """Get the information and comments of the specified post from URLs or IDs"""
        utils.logger.info("[DouYinCrawler.get_specified_awemes] Parsing video URLs...")
        aweme_id_list = []
        for video_url in config.DY_SPECIFIED_ID_LIST:
            try:
                video_info = parse_video_info_from_url(video_url)

                # Handle short links
                if video_info.url_type == "short":
                    utils.logger.info(f"[DouYinCrawler.get_specified_awemes] Resolving short link: {video_url}")
                    resolved_url = await self.dy_client.resolve_short_url(video_url)
                    if resolved_url:
                        # Extract the video ID from the resolved URL
                        video_info = parse_video_info_from_url(resolved_url)
                        utils.logger.info(f"[DouYinCrawler.get_specified_awemes] Short link resolved to aweme ID: {video_info.aweme_id}")
                    else:
                        utils.logger.error(f"[DouYinCrawler.get_specified_awemes] Failed to resolve short link: {video_url}")
                        continue

                aweme_id_list.append(video_info.aweme_id)
                utils.logger.info(f"[DouYinCrawler.get_specified_awemes] Parsed aweme ID: {video_info.aweme_id} from {video_url}")
            except ValueError as e:
                utils.logger.error(f"[DouYinCrawler.get_specified_awemes] Failed to parse video URL: {e}")
                continue

        semaphore = asyncio.Semaphore(config.MAX_CONCURRENCY_NUM)
        task_list = [self.get_aweme_detail(aweme_id=aweme_id, semaphore=semaphore) for aweme_id in config.DY_SPECIFIED_ID_LIST]
        task_list = [self.get_aweme_detail(aweme_id=aweme_id, semaphore=semaphore) for aweme_id in aweme_id_list]
        aweme_details = await asyncio.gather(*task_list)
        for aweme_detail in aweme_details:
            if aweme_detail is not None:
                await douyin_store.update_douyin_aweme(aweme_item=aweme_detail)
                await self.get_aweme_media(aweme_item=aweme_detail)
        await self.batch_get_note_comments(config.DY_SPECIFIED_ID_LIST)
        await self.batch_get_note_comments(aweme_id_list)

    async def get_aweme_detail(self, aweme_id: str, semaphore: asyncio.Semaphore) -> Any:
        """Get note detail"""
@@ -218,10 +243,20 @@ class DouYinCrawler(AbstractCrawler):

    async def get_creators_and_videos(self) -> None:
        """
        Get the information and videos of the specified creator
        Get the information and videos of the specified creator from URLs or IDs
        """
        utils.logger.info("[DouYinCrawler.get_creators_and_videos] Begin get douyin creators")
        for user_id in config.DY_CREATOR_ID_LIST:
        utils.logger.info("[DouYinCrawler.get_creators_and_videos] Parsing creator URLs...")

        for creator_url in config.DY_CREATOR_ID_LIST:
            try:
                creator_info_parsed = parse_creator_info_from_url(creator_url)
                user_id = creator_info_parsed.sec_user_id
                utils.logger.info(f"[DouYinCrawler.get_creators_and_videos] Parsed sec_user_id: {user_id} from {creator_url}")
            except ValueError as e:
                utils.logger.error(f"[DouYinCrawler.get_creators_and_videos] Failed to parse creator URL: {e}")
                continue

            creator_info: Dict = await self.dy_client.get_user_info(user_id)
            if creator_info:
                await douyin_store.save_creator(user_id, creator=creator_info)
@@ -16,10 +16,15 @@
# @Desc : Fetch the a_bogus parameter. For learning and exchange only - no commercial use; contact the author for removal in case of infringement

import random
import re
from typing import Optional

import execjs
from playwright.async_api import Page

from model.m_douyin import VideoUrlInfo, CreatorUrlInfo
from tools.crawler_util import extract_url_params_to_dict

douyin_sign_obj = execjs.compile(open('libs/douyin.js', encoding='utf-8-sig').read())

def get_web_id():
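Editor's note: tools.crawler_util.extract_url_params_to_dict is imported above but not shown in this diff. A stand-in with the contract the parsers below rely on (a flat dict with one string value per key) might look like this; it is an assumption about the helper's behavior, not the repo's actual implementation:

```python
# Assumed behavior of extract_url_params_to_dict: query params as a flat dict.
from urllib.parse import parse_qs, urlparse


def extract_url_params_to_dict(url: str) -> dict:
    # parse_qs maps each key to a list of values; keep the first to match
    # the params.get("modal_id") usage in parse_video_info_from_url below.
    return {key: values[0] for key, values in parse_qs(urlparse(url).query).items()}


print(extract_url_params_to_dict("https://www.douyin.com/user/xxx?modal_id=7525538910311632128"))
# -> {'modal_id': '7525538910311632128'}
```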
@@ -83,3 +88,103 @@ async def get_a_bogus_from_playright(params: str, post_data: dict, user_agent: s

    return a_bogus


def parse_video_info_from_url(url: str) -> VideoUrlInfo:
    """
    Parse the video ID from a Douyin video URL.
    Supported formats:
    1. Plain video link: https://www.douyin.com/video/7525082444551310602
    2. Links with a modal_id parameter:
        - https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?modal_id=7525082444551310602
        - https://www.douyin.com/root/search/python?modal_id=7471165520058862848
    3. Short link: https://v.douyin.com/iF12345ABC/ (must be resolved by the client)
    4. Bare ID: 7525082444551310602

    Args:
        url: Douyin video link or ID
    Returns:
        VideoUrlInfo: object containing the video ID
    """
    # If the input is a bare numeric ID, return it directly
    if url.isdigit():
        return VideoUrlInfo(aweme_id=url, url_type="normal")

    # Check for a short link (v.douyin.com)
    if "v.douyin.com" in url or (url.startswith("http") and len(url) < 50 and "video" not in url):
        return VideoUrlInfo(aweme_id="", url_type="short")  # must be resolved by the client

    # Try to extract modal_id from the URL parameters
    params = extract_url_params_to_dict(url)
    modal_id = params.get("modal_id")
    if modal_id:
        return VideoUrlInfo(aweme_id=modal_id, url_type="modal")

    # Extract the ID from a standard video URL: /video/<digits>
    video_pattern = r'/video/(\d+)'
    match = re.search(video_pattern, url)
    if match:
        aweme_id = match.group(1)
        return VideoUrlInfo(aweme_id=aweme_id, url_type="normal")

    raise ValueError(f"Unable to parse a video ID from the URL: {url}")


def parse_creator_info_from_url(url: str) -> CreatorUrlInfo:
    """
    Parse the creator ID (sec_user_id) from a Douyin creator profile URL.
    Supported formats:
    1. Creator profile: https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main
    2. Bare ID: MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE

    Args:
        url: Douyin creator profile link or sec_user_id
    Returns:
        CreatorUrlInfo: object containing the creator ID
    """
    # If it is a bare ID (usually starting with MS4wLjABAAAA), return it directly
    if url.startswith("MS4wLjABAAAA") or (not url.startswith("http") and "douyin.com" not in url):
        return CreatorUrlInfo(sec_user_id=url)

    # Extract sec_user_id from a creator profile URL: /user/xxx
    user_pattern = r'/user/([^/?]+)'
    match = re.search(user_pattern, url)
    if match:
        sec_user_id = match.group(1)
        return CreatorUrlInfo(sec_user_id=sec_user_id)

    raise ValueError(f"Unable to parse a creator ID from the URL: {url}")


if __name__ == '__main__':
    # Test video URL parsing
    print("=== Video URL parsing tests ===")
    test_urls = [
        "https://www.douyin.com/video/7525082444551310602",
        "https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main&modal_id=7525082444551310602",
        "https://www.douyin.com/root/search/python?aid=b733a3b0-4662-4639-9a72-c2318fba9f3f&modal_id=7471165520058862848&type=general",
        "7525082444551310602",
    ]
    for url in test_urls:
        try:
            result = parse_video_info_from_url(url)
            print(f"✓ URL: {url[:80]}...")
            print(f"  Result: {result}\n")
        except Exception as e:
            print(f"✗ URL: {url}")
            print(f"  Error: {e}\n")

    # Test creator URL parsing
    print("=== Creator URL parsing tests ===")
    test_creator_urls = [
        "https://www.douyin.com/user/MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE?from_tab_name=main",
        "MS4wLjABAAAATJPY7LAlaa5X-c8uNdWkvz0jUGgpw4eeXIwu_8BhvqE",
    ]
    for url in test_creator_urls:
        try:
            result = parse_creator_info_from_url(url)
            print(f"✓ URL: {url[:80]}...")
            print(f"  Result: {result}\n")
        except Exception as e:
            print(f"✗ URL: {url}")
            print(f"  Error: {e}\n")
@@ -26,6 +26,7 @@ from playwright.async_api import (

import config
from base.base_crawler import AbstractCrawler
from model.m_kuaishou import VideoUrlInfo, CreatorUrlInfo
from proxy.proxy_ip_pool import IpInfoModel, create_ip_pool
from store import kuaishou as kuaishou_store
from tools import utils
@@ -34,6 +35,7 @@ from var import comment_tasks_var, crawler_type_var, source_keyword_var

from .client import KuaiShouClient
from .exception import DataFetchError
from .help import parse_video_info_from_url, parse_creator_info_from_url
from .login import KuaishouLogin


@@ -168,16 +170,27 @@ class KuaishouCrawler(AbstractCrawler):

    async def get_specified_videos(self):
        """Get the information and comments of the specified post"""
        utils.logger.info("[KuaishouCrawler.get_specified_videos] Parsing video URLs...")
        video_ids = []
        for video_url in config.KS_SPECIFIED_ID_LIST:
            try:
                video_info = parse_video_info_from_url(video_url)
                video_ids.append(video_info.video_id)
                utils.logger.info(f"Parsed video ID: {video_info.video_id} from {video_url}")
            except ValueError as e:
                utils.logger.error(f"Failed to parse video URL: {e}")
                continue

        semaphore = asyncio.Semaphore(config.MAX_CONCURRENCY_NUM)
        task_list = [
            self.get_video_info_task(video_id=video_id, semaphore=semaphore)
            for video_id in config.KS_SPECIFIED_ID_LIST
            for video_id in video_ids
        ]
        video_details = await asyncio.gather(*task_list)
        for video_detail in video_details:
            if video_detail is not None:
                await kuaishou_store.update_kuaishou_video(video_detail)
        await self.batch_get_video_comments(config.KS_SPECIFIED_ID_LIST)
        await self.batch_get_video_comments(video_ids)

    async def get_video_info_task(
        self, video_id: str, semaphore: asyncio.Semaphore
@@ -367,16 +380,25 @@ class KuaishouCrawler(AbstractCrawler):
        utils.logger.info(
            "[KuaiShouCrawler.get_creators_and_videos] Begin get kuaishou creators"
        )
        for user_id in config.KS_CREATOR_ID_LIST:
            # get creator detail info from web html content
            createor_info: Dict = await self.ks_client.get_creator_info(user_id=user_id)
            if createor_info:
                await kuaishou_store.save_creator(user_id, creator=createor_info)
        for creator_url in config.KS_CREATOR_ID_LIST:
            try:
                # Parse creator URL to get user_id
                creator_info: CreatorUrlInfo = parse_creator_info_from_url(creator_url)
                utils.logger.info(f"[KuaiShouCrawler.get_creators_and_videos] Parse creator URL info: {creator_info}")
                user_id = creator_info.user_id

                # get creator detail info from web html content
                createor_info: Dict = await self.ks_client.get_creator_info(user_id=user_id)
                if createor_info:
                    await kuaishou_store.save_creator(user_id, creator=createor_info)
            except ValueError as e:
                utils.logger.error(f"[KuaiShouCrawler.get_creators_and_videos] Failed to parse creator URL: {e}")
                continue

            # Get all video information of the creator
            all_video_list = await self.ks_client.get_all_videos_by_creator(
                user_id=user_id,
                crawl_interval=random.random(),
                crawl_interval=config.CRAWLER_MAX_SLEEP_SEC,
                callback=self.fetch_creator_video_detail,
            )
media_platform/kuaishou/help.py (99 lines, new file)
@@ -0,0 +1,99 @@
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# 5. It must not be used for any illegal or improper purpose.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.


# -*- coding: utf-8 -*-

import re
from model.m_kuaishou import VideoUrlInfo, CreatorUrlInfo


def parse_video_info_from_url(url: str) -> VideoUrlInfo:
    """
    Parse the video ID from a Kuaishou video URL.
    Supported formats:
    1. Full video URL: "https://www.kuaishou.com/short-video/3x3zxz4mjrsc8ke?authorId=3x84qugg4ch9zhs&streamSource=search"
    2. Bare video ID: "3x3zxz4mjrsc8ke"

    Args:
        url: Kuaishou video link or video ID
    Returns:
        VideoUrlInfo: object containing the video ID
    """
    # If it contains neither http nor kuaishou.com, treat it as a bare ID
    if not url.startswith("http") and "kuaishou.com" not in url:
        return VideoUrlInfo(video_id=url, url_type="normal")

    # Extract the ID from a standard video URL: /short-video/<video id>
    video_pattern = r'/short-video/([a-zA-Z0-9_-]+)'
    match = re.search(video_pattern, url)
    if match:
        video_id = match.group(1)
        return VideoUrlInfo(video_id=video_id, url_type="normal")

    raise ValueError(f"Unable to parse a video ID from the URL: {url}")


def parse_creator_info_from_url(url: str) -> CreatorUrlInfo:
    """
    Parse the creator ID from a Kuaishou creator profile URL.
    Supported formats:
    1. Creator profile: "https://www.kuaishou.com/profile/3x84qugg4ch9zhs"
    2. Bare ID: "3x4sm73aye7jq7i"

    Args:
        url: Kuaishou creator profile link or user_id
    Returns:
        CreatorUrlInfo: object containing the creator ID
    """
    # If it contains neither http nor kuaishou.com, treat it as a bare ID
    if not url.startswith("http") and "kuaishou.com" not in url:
        return CreatorUrlInfo(user_id=url)

    # Extract user_id from a creator profile URL: /profile/xxx
    user_pattern = r'/profile/([a-zA-Z0-9_-]+)'
    match = re.search(user_pattern, url)
    if match:
        user_id = match.group(1)
        return CreatorUrlInfo(user_id=user_id)

    raise ValueError(f"Unable to parse a creator ID from the URL: {url}")


if __name__ == '__main__':
    # Test video URL parsing
    print("=== Video URL parsing tests ===")
    test_video_urls = [
        "https://www.kuaishou.com/short-video/3x3zxz4mjrsc8ke?authorId=3x84qugg4ch9zhs&streamSource=search&area=searchxxnull&searchKey=python",
        "3xf8enb8dbj6uig",
    ]
    for url in test_video_urls:
        try:
            result = parse_video_info_from_url(url)
            print(f"✓ URL: {url[:80]}...")
            print(f"  Result: {result}\n")
        except Exception as e:
            print(f"✗ URL: {url}")
            print(f"  Error: {e}\n")

    # Test creator URL parsing
    print("=== Creator URL parsing tests ===")
    test_creator_urls = [
        "https://www.kuaishou.com/profile/3x84qugg4ch9zhs",
        "3x4sm73aye7jq7i",
    ]
    for url in test_creator_urls:
        try:
            result = parse_creator_info_from_url(url)
            print(f"✓ URL: {url[:80]}...")
            print(f"  Result: {result}\n")
        except Exception as e:
            print(f"✗ URL: {url}")
            print(f"  Error: {e}\n")
@@ -451,13 +451,26 @@ class XiaoHongShuClient(AbstractApiClient):
            result.extend(comments)
        return result

    async def get_creator_info(self, user_id: str) -> Dict:
    async def get_creator_info(
        self, user_id: str, xsec_token: str = "", xsec_source: str = ""
    ) -> Dict:
        """
        Get a brief user profile by parsing the web version of the user's profile HTML.
        The PC profile page keeps its data on the window.__INITIAL_STATE__ variable; parsing that is enough.
        eg: https://www.xiaohongshu.com/user/profile/59d8cb33de5fb4696bf17217

        Args:
            user_id: user ID
            xsec_token: verification token (optional; pass it if the URL contains this parameter)
            xsec_source: channel source (optional; pass it if the URL contains this parameter)

        Returns:
            Dict: creator info
        """
        # Build the URI, appending the xsec parameters if present
        uri = f"/user/profile/{user_id}"
        if xsec_token and xsec_source:
            uri = f"{uri}?xsec_token={xsec_token}&xsec_source={xsec_source}"

        html_content = await self.request(
            "GET", self._domain + uri, return_response=True, headers=self.headers
        )
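Editor's note: the docstring above says the profile data lives on window.__INITIAL_STATE__ in the page HTML. A hypothetical sketch of that extraction step follows; the regex and the undefined-to-null replacement are assumptions about the page format, not code from this repo:

```python
# Hypothetical sketch: pull window.__INITIAL_STATE__ out of profile HTML.
import json
import re
from typing import Dict


def extract_initial_state(html: str) -> Dict:
    # The profile page embeds its data as: <script>window.__INITIAL_STATE__={...}</script>
    match = re.search(r"window\.__INITIAL_STATE__\s*=\s*(\{.*?\})\s*</script>", html, re.S)
    if not match:
        raise ValueError("window.__INITIAL_STATE__ not found in page HTML")
    # The embedded JS may contain the literal `undefined`, which JSON cannot parse,
    # so replace it before json.loads (an assumption about the page's serialization).
    return json.loads(match.group(1).replace("undefined", "null"))
```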
@@ -26,7 +26,7 @@ from tenacity import RetryError

import config
from base.base_crawler import AbstractCrawler
from config import CRAWLER_MAX_COMMENTS_COUNT_SINGLENOTES
from model.m_xiaohongshu import NoteUrlInfo
from model.m_xiaohongshu import NoteUrlInfo, CreatorUrlInfo
from proxy.proxy_ip_pool import IpInfoModel, create_ip_pool
from store import xhs as xhs_store
from tools import utils
@@ -36,7 +36,7 @@ from var import crawler_type_var, source_keyword_var

from .client import XiaoHongShuClient
from .exception import DataFetchError
from .field import SearchSortType
from .help import parse_note_info_from_note_url, get_search_id
from .help import parse_note_info_from_note_url, parse_creator_info_from_url, get_search_id
from .login import XiaoHongShuLogin


@@ -174,11 +174,24 @@ class XiaoHongShuCrawler(AbstractCrawler):

    async def get_creators_and_notes(self) -> None:
        """Get creator's notes and retrieve their comment information."""
        utils.logger.info("[XiaoHongShuCrawler.get_creators_and_notes] Begin get xiaohongshu creators")
        for user_id in config.XHS_CREATOR_ID_LIST:
            # get creator detail info from web html content
            createor_info: Dict = await self.xhs_client.get_creator_info(user_id=user_id)
            if createor_info:
                await xhs_store.save_creator(user_id, creator=createor_info)
        for creator_url in config.XHS_CREATOR_ID_LIST:
            try:
                # Parse creator URL to get user_id and security tokens
                creator_info: CreatorUrlInfo = parse_creator_info_from_url(creator_url)
                utils.logger.info(f"[XiaoHongShuCrawler.get_creators_and_notes] Parse creator URL info: {creator_info}")
                user_id = creator_info.user_id

                # get creator detail info from web html content
                createor_info: Dict = await self.xhs_client.get_creator_info(
                    user_id=user_id,
                    xsec_token=creator_info.xsec_token,
                    xsec_source=creator_info.xsec_source
                )
                if createor_info:
                    await xhs_store.save_creator(user_id, creator=createor_info)
            except ValueError as e:
                utils.logger.error(f"[XiaoHongShuCrawler.get_creators_and_notes] Failed to parse creator URL: {e}")
                continue

        # Use fixed crawling interval
        crawl_interval = config.CRAWLER_MAX_SLEEP_SEC
@@ -271,7 +284,7 @@ class XiaoHongShuCrawler(AbstractCrawler):

        try:
            note_detail = await self.xhs_client.get_note_by_id(note_id, xsec_source, xsec_token)
        except RetryError as e:
        except RetryError:
            pass

        if not note_detail:
@@ -15,7 +15,7 @@ import random
import time
import urllib.parse

from model.m_xiaohongshu import NoteUrlInfo
from model.m_xiaohongshu import NoteUrlInfo, CreatorUrlInfo
from tools.crawler_util import extract_url_params_to_dict


@@ -306,6 +306,37 @@ def parse_note_info_from_note_url(url: str) -> NoteUrlInfo:
    return NoteUrlInfo(note_id=note_id, xsec_token=xsec_token, xsec_source=xsec_source)


def parse_creator_info_from_url(url: str) -> CreatorUrlInfo:
    """
    Parse creator info from a Xiaohongshu creator profile URL.
    Supported formats:
    1. Full URL: "https://www.xiaohongshu.com/user/profile/5eb8e1d400000000010075ae?xsec_token=AB1nWBKCo1vE2HEkfoJUOi5B6BE5n7wVrbdpHoWIj5xHw=&xsec_source=pc_feed"
    2. Bare ID: "5eb8e1d400000000010075ae"

    Args:
        url: creator profile URL or user_id
    Returns:
        CreatorUrlInfo: object containing user_id, xsec_token, and xsec_source
    """
    # If it is a bare ID (24 hexadecimal characters), return it directly
    if len(url) == 24 and all(c in "0123456789abcdef" for c in url):
        return CreatorUrlInfo(user_id=url, xsec_token="", xsec_source="")

    # Extract user_id from the URL: /user/profile/xxx
    import re
    user_pattern = r'/user/profile/([^/?]+)'
    match = re.search(user_pattern, url)
    if match:
        user_id = match.group(1)
        # Extract the xsec_token and xsec_source parameters
        params = extract_url_params_to_dict(url)
        xsec_token = params.get("xsec_token", "")
        xsec_source = params.get("xsec_source", "")
        return CreatorUrlInfo(user_id=user_id, xsec_token=xsec_token, xsec_source=xsec_source)

    raise ValueError(f"Unable to parse creator info from the URL: {url}")


if __name__ == '__main__':
    _img_url = "https://sns-img-bd.xhscdn.com/7a3abfaf-90c1-a828-5de7-022c80b92aa3"
    # Get the URLs of one image across multiple CDNs
@@ -313,4 +344,19 @@ if __name__ == '__main__':
    final_img_url = get_img_url_by_trace_id(get_trace_id(_img_url))
    print(final_img_url)

    # Test creator URL parsing
    print("\n=== Creator URL parsing tests ===")
    test_creator_urls = [
        "https://www.xiaohongshu.com/user/profile/5eb8e1d400000000010075ae?xsec_token=AB1nWBKCo1vE2HEkfoJUOi5B6BE5n7wVrbdpHoWIj5xHw=&xsec_source=pc_feed",
        "5eb8e1d400000000010075ae",
    ]
    for url in test_creator_urls:
        try:
            result = parse_creator_info_from_url(url)
            print(f"✓ URL: {url[:80]}...")
            print(f"  Result: {result}\n")
        except Exception as e:
            print(f"✗ URL: {url}")
            print(f"  Error: {e}\n")
model/m_bilibili.py (25 lines, new file)
@@ -0,0 +1,25 @@
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# 5. It must not be used for any illegal or improper purpose.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.


# -*- coding: utf-8 -*-

from pydantic import BaseModel, Field


class VideoUrlInfo(BaseModel):
    """Bilibili video URL info"""
    video_id: str = Field(title="video id (BV id)")
    video_type: str = Field(default="video", title="video type")


class CreatorUrlInfo(BaseModel):
    """Bilibili creator URL info"""
    creator_id: str = Field(title="creator id (UID)")
@@ -1,12 +1,25 @@
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# 5. It must not be used for any illegal or improper purpose.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.


# -*- coding: utf-8 -*-

from pydantic import BaseModel, Field


class VideoUrlInfo(BaseModel):
    """Douyin video URL info"""
    aweme_id: str = Field(title="aweme id (video id)")
    url_type: str = Field(default="normal", title="url type: normal, short, modal")


class CreatorUrlInfo(BaseModel):
    """Douyin creator URL info"""
    sec_user_id: str = Field(title="sec_user_id (creator id)")
@@ -1,12 +1,25 @@
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# Disclaimer: This code is for learning and research purposes only. Users must follow these principles:
# 1. It must not be used for any commercial purpose.
# 2. Usage must comply with the target platform's terms of service and robots.txt rules.
# 3. No large-scale crawling or disruption of the platform's operations.
# 4. Request rates must be kept reasonable to avoid placing undue load on the target platform.
# 5. It must not be used for any illegal or improper purpose.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.
#
# See the LICENSE file in the project root for the full license terms.
# By using this code you agree to the principles above and all terms in the LICENSE.


# -*- coding: utf-8 -*-

from pydantic import BaseModel, Field


class VideoUrlInfo(BaseModel):
    """Kuaishou video URL info"""
    video_id: str = Field(title="video id (photo id)")
    url_type: str = Field(default="normal", title="url type: normal")


class CreatorUrlInfo(BaseModel):
    """Kuaishou creator URL info"""
    user_id: str = Field(title="user id (creator id)")
@@ -18,4 +18,11 @@ from pydantic import BaseModel, Field
class NoteUrlInfo(BaseModel):
    note_id: str = Field(title="note id")
    xsec_token: str = Field(title="xsec token")
    xsec_source: str = Field(title="xsec source")


class CreatorUrlInfo(BaseModel):
    """Xiaohongshu creator URL info"""
    user_id: str = Field(title="user id (creator id)")
    xsec_token: str = Field(default="", title="xsec token")
    xsec_source: str = Field(default="", title="xsec source")
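Editor's note: a quick illustration of how these URL-info models behave - the optional token fields fall back to empty strings when only a bare ID is parsed (minimal demo, assuming a standard pydantic install; not code from this diff):

```python
from pydantic import BaseModel, Field


class CreatorUrlInfo(BaseModel):
    """Xiaohongshu creator URL info (mirrors the model above)"""
    user_id: str = Field(title="user id (creator id)")
    xsec_token: str = Field(default="", title="xsec token")
    xsec_source: str = Field(default="", title="xsec source")


info = CreatorUrlInfo(user_id="5eb8e1d400000000010075ae")
print(info)  # xsec_token and xsec_source default to "" for bare-ID input
```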