
[python] A focused crawler for comparing Taobao product prices

Author: jcmp      Published: 2021-04-24

Implemented step by step, following the course code from Professor Song Tian at Beijing Institute of Technology.

Goal:

Fetch Taobao search result pages for a given keyword, extract the product listings, and compare their prices.


Main implementation points:

1. Taobao's search interface

The search URL is built from the keyword (the product fields are later extracted with regular expressions):
URL = "https://s.taobao.com/search?q=" + keyword

2. The data structure for storing product information (a list of [price, title] pairs)

3. Pagination handling (see the sketch after this list)
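
As a small illustration of points 1 and 3, this sketch builds the keyword search URL and its paginated variants the same way the full code below does (the `s` parameter is the item offset; 44 items per page comes from the run result at the end of the article):

# Sketch: the search URL and its paginated variants, mirroring the
# URL pattern used in the full code below.
keyword = '手机'          # search keyword
start_url = 'https://s.taobao.com/search?q=' + keyword
depth = 3                 # number of result pages to visit

for i in range(depth):
    # Each Taobao result page holds 44 items; `s` is the item offset.
    url = start_url + '&s=' + str(44 * i)
    print(url)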

Main problems:

Taobao's anti-crawling mechanism causes simple direct scraping to fail, so the crawler has to access the site the way a browser would (a minimal sketch follows).
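
A minimal sketch of such a browser-mimicking request, assuming the requests library and placeholder header values (the actual cookie and user-agent used in this article appear in the full code below):

import requests

# Placeholder headers; copy your own cookie and user-agent from the
# browser's developer tools (steps at the end of this article).
kv = {'cookie': 'YOUR_TAOBAO_COOKIE',
      'user-agent': 'Mozilla/5.0'}

r = requests.get('https://s.taobao.com/search?q=手机', headers=kv, timeout=30)
r.raise_for_status()    # raises if Taobao rejects the request
print(len(r.text))      # length of the returned page source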
Feasibility check for the focused crawler (Taobao's robots.txt):

The result shows that crawling is not allowed [disallow]; a programmatic version of this check is sketched below.
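
A hedged sketch of how such a feasibility check can be done with Python's standard urllib.robotparser (this is illustrative and not part of the original course code; the exact rules depend on Taobao's current robots.txt):

from urllib import robotparser

# Parse the robots.txt of the host being crawled (robots.txt conventionally
# lives at the site root) and ask whether a generic crawler may fetch the
# search URL.
rp = robotparser.RobotFileParser()
rp.set_url('https://s.taobao.com/robots.txt')
rp.read()

search_url = 'https://s.taobao.com/search?q=手机'
# If the file is served and disallows crawling, this prints False.
print(rp.can_fetch('*', search_url))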

The code is as follows:

import requests
import re


def getHTMLText(url):
    # Pretend to be a browser: send a real cookie and user-agent
    kv = {'cookie': 'cookie2=14df95ca1f48116a5a34610507b02333; t=d15ae95e1060852ad4243b96b295a7dc; _tb_token_=e75160e73b7e5; cna=M688FuCsY2wCAXDgRUH/hKTk; v=0; unb=3352836622; uc3=lg2=VT5L2FSpMGV7TQ%3D%3D&nk2=AniT9PpU6lsw%2BH60PN%2F7%2FXaT&vt3=F8dByuchWQ9bb7TaCU4%3D&id2=UNN4BKnQgACo7Q%3D%3D; csg=59443507; lgc=aichitudoudemao233; cookie17=UNN4BKnQgACo7Q%3D%3D; dnk=aichitudoudemao233; skt=3cb4f2957552001d; existShop=MTU3MjE5MjU4Ng%3D%3D; uc4=nk4=0%40AJNGw06trBMedW6r%2FQlFOOjBhgw85hpyqLnNC3I%3D&id4=0%40UgQwEYjMutDazneilIoLYcO2uOaz; tracknick=aichitudoudemao233; _cc_=UIHiLt3xSw%3D%3D; tg=0; _l_g_=Ug%3D%3D; sg=327; _nk_=aichitudoudemao233; cookie1=BxVXTygrONJyVN23g9iTU5Oz3K6wvjcRxGgKJfb0wrI%3D; thw=cn; mt=ci=115_1; enc=75b73ilmqJ9nY0ao0mcj2Mr8brnfEA6GkjQTPgwJgzKzc5zEWHGqBD1BhXlF7CLv63SmsH7llCpIgjAaHWWYSg%3D%3D; alitrackid=www.taobao.com; lastalitrackid=www.taobao.com; hng=CN%7Czh-CN%7CCNY%7C156; JSESSIONID=586CDB7DE478B86E36E2E2E19E99C986; uc1=cookie14=UoTbnxk2wn3EzA%3D%3D&cookie15=WqG3DMC9VAQiUQ%3D%3D; l=dBOA9jvVqGs3iHG2BOCalurza779SIRYBuPzaNbMi_5pc68sFq7OkaazPFJ6DjWfToYB4iuRp4J9-etbi-y06Pt-g3fPaxDc.; isg=BO7uNkXtP2w320sIiRQGkRYvP0Kw77LpGyUd6Bi3X_Gs-45VgH7S-ZV5tieyeqoB',
          'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36'}
    try:
        # Fetch the page source
        r = requests.get(url, headers=kv, timeout=30)
        r.raise_for_status()
        r.encoding = 'utf-8'
        return r.text
    except:
        return ''


def parsePage(ilt, html):
    try:
        # Extract the price and title fields with regular expressions
        plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
        tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)
        for i in range(len(plt)):
            # eval strips the surrounding quotes from the matched value
            price = eval(plt[i].split(':')[1])
            title = eval(tlt[i].split(':')[1])
            ilt.append([price, title])
    except:
        print("")


def printGoodsInfo(ilt):
    # Print the results as a numbered table: index, price, product title
    tplt = "{:4}\t{:8}\t{:16}"
    print(tplt.format("序号", "价格", "商品名称"))
    count = 0
    for g in ilt:
        count = count + 1
        print(tplt.format(count, g[0], g[1]))


def main():
    goods_name = '手机'    # search keyword
    depth = 3              # number of pages to crawl
    # The search page to crawl
    s_url = 'https://s.taobao.com/search?q=' + goods_name
    infoList = []
    # Pagination: each page holds 44 items, offset via the `s` parameter
    for i in range(depth):
        try:
            url = s_url + '&s=' + str(44 * i)
            html = getHTMLText(url)
            parsePage(infoList, html)
        except:
            continue
    printGoodsInfo(infoList)


main()
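
To make the parsing step concrete, here is a tiny standalone example of the two regular expressions applied to a made-up page fragment (the JSON-like snippet is illustrative, not real Taobao output):

import re

# Illustrative fragment in the same "key":"value" shape the crawler expects.
html = '"raw_title":"Example phone","view_price":"59.00"'

plt = re.findall(r'\"view_price\"\:\"[\d\.]*\"', html)
tlt = re.findall(r'\"raw_title\"\:\".*?\"', html)

price = eval(plt[0].split(':')[1])   # eval strips the quotes -> '59.00'
title = eval(tlt[0].split(':')[1])   # -> 'Example phone'
print(price, title)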

How to get the cookie and user-agent

Taking the Chrome browser as an example:

First open the developer tools.

Switch to the Network tab:

Reload the page, open the document named search?q=%E6%89%8B%E6%..., and scroll down to find the cookie and user-agent:

By default:

user-agent: Mozilla/5.0
The cookie changes over time.
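
For context on why the user-agent has to be overridden at all, this small sketch prints the default user-agent that the requests library sends when none is supplied, which openly identifies the script as a crawler:

import requests

# requests identifies itself as 'python-requests/<version>' unless a
# user-agent header is supplied, which is easy for a site to filter.
s = requests.Session()
print(s.headers['User-Agent'])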

Run result:

Taobao displays 44 products per page; this example fetches 3 pages of results.