Requests is an HTTP library written in Python, built on top of urllib and released under the Apache2 License. It is far more convenient than urllib, saves a great deal of boilerplate, and fully covers everyday HTTP testing needs. Requests is developed around the idioms of PEP 20, so it is also more Pythonic than urllib. Better still, it supports Python 3. For installing the requests library, see: http://blog.csdn.net/qq_29186489/article/details/78581249
GET requests with parameters
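Before sending anything over the network, you can check how requests will encode query parameters by building and preparing a request locally; a minimal sketch (nothing is actually sent):

```python
import requests

# Build a Request and prepare it to inspect the final URL offline;
# the dict passed as params is encoded into the query string.
req = requests.Request("GET", "http://httpbin.org/get",
                       params={"name": "germey", "age": 22})
prepared = req.prepare()
print(prepared.url)  # http://httpbin.org/get?name=germey&age=22
```

Note that non-string values such as the integer 22 are converted to strings during encoding.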
import requests

# Pass the parameters directly in the URL
response = requests.get("http://httpbin.org/get?name=germey&age=22")
print(response.text)

# Or pass the parameters as a dict via the params argument
data = {
    "name": "germey",
    "age": 22
}
response = requests.get("http://httpbin.org/get", params=data)
print(response.text)

Parsing a JSON response
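response.json() is essentially a shorthand for json.loads(response.text), which is why the example below prints the same dict twice. Both raise ValueError (json.JSONDecodeError) when the body is not valid JSON, e.g. an HTML error page; a small offline sketch using a stand-in string:

```python
import json

# A sample JSON body standing in for response.text
body = '{"url": "http://httpbin.org/get"}'
print(json.loads(body)["url"])

# A non-JSON body (e.g. an HTML page) raises ValueError
try:
    json.loads("<html>not json</html>")
except ValueError:
    print("body was not JSON")
```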
import requests
import json

response = requests.get("http://httpbin.org/get")
print(response.text)
print(response.json())
print(json.loads(response.text))

Fetching binary data
import requests

response = requests.get("http://github.com/favicon.ico")
# Text content
print(response.text)
# Binary content
print(response.content)

Downloading an image
import requests

response = requests.get("http://github.com/favicon.ico")
# The with block closes the file automatically
with open("1.ico", "wb") as f:
    f.write(response.content)

Adding headers
import requests
from fake_useragent import UserAgent

ua = UserAgent()
headers = {
    "User-Agent": ua.random,
}
response = requests.get("https://www.zhihu.com/explore", headers=headers)
print(response.text)

Basic POST request
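The example below posts form data via data=; requests can also send a JSON body with the json= argument instead. A sketch using prepare(), so the request is only built, not sent:

```python
import requests

# json= serializes the dict to a JSON body and sets the Content-Type
# header, unlike data=, which sends form-encoded fields.
req = requests.Request("POST", "http://httpbin.org/post",
                       json={"name": "yhj", "age": 12})
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # application/json
print(prepared.body)
```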
import requests
from fake_useragent import UserAgent

ua = UserAgent()
headers = {
    "User-Agent": ua.random,
}
data = {
    "name": "yhj",
    "age": 12
}
response = requests.post("http://httpbin.org/post", data=data)
print(response.text)

response = requests.post("http://httpbin.org/post", data=data, headers=headers)
print(response.json())

Common Response attributes
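One attribute worth noting: response.headers is a CaseInsensitiveDict, so header names can be looked up in any case. This can be shown offline with the class itself:

```python
from requests.structures import CaseInsensitiveDict

# response.headers behaves like this dict: lookups ignore case.
h = CaseInsensitiveDict({"Content-Type": "text/html; charset=utf-8"})
print(h["content-type"])      # text/html; charset=utf-8
print(h.get("CONTENT-TYPE"))  # text/html; charset=utf-8
```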
import requests

response = requests.get("http://jianshu.com")
print(type(response.status_code), response.status_code)
print(type(response.headers), response.headers)
print(type(response.cookies), response.cookies)
print(type(response.url), response.url)
print(type(response.history), response.history)

Checking the status code
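requests.codes maps readable names to numeric status codes, and raise_for_status() is a common alternative to comparing codes by hand. Both can be shown offline; here a Response object is constructed by hand purely to demonstrate the raise:

```python
import requests
from requests.exceptions import HTTPError

# Named status codes
print(requests.codes.ok)         # 200
print(requests.codes.not_found)  # 404

# raise_for_status() raises HTTPError for 4xx/5xx codes
r = requests.Response()
r.status_code = 404
try:
    r.raise_for_status()
except HTTPError as exc:
    print("request failed:", exc)
```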
import requests

response = requests.get("http://jianshu.com")

# Compare against the named code
if not response.status_code == requests.codes.ok:
    exit()
else:
    print("Request successful")

# Or compare against the number directly
if not response.status_code == 200:
    exit()
else:
    print("Request successful")

Uploading files
import requests

files = {
    "file": open("1.ico", "rb")
}
response = requests.post("http://httpbin.org/post", files=files)
print(response.text)

Getting cookies
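The example below reads cookies out of a response; cookies can also be sent the other way, via the cookies argument. Preparing the request offline shows the Cookie header that would go out:

```python
import requests

# A dict passed as cookies= is turned into a Cookie request header.
req = requests.Request("GET", "http://httpbin.org/cookies",
                       cookies={"number": "123456789"})
prepared = req.prepare()
print(prepared.headers["Cookie"])  # number=123456789
```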
import requests

response = requests.get("http://www.baidu.com")
print(response.cookies)
for key, value in response.cookies.items():
    print(key + "=" + value)

Session persistence
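A Session keeps cookies across requests in a single jar, which is what makes the example below work. The jar itself can be seeded and inspected locally, without any network traffic:

```python
import requests

# The Session's cookie jar persists between its requests;
# set() and get() work on it directly.
s = requests.Session()
s.cookies.set("number", "123456789")
print(s.cookies.get("number"))  # 123456789
```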
import requests

s = requests.Session()
s.get("http://httpbin.org/cookies/set/number/123456789")
response = s.get("http://httpbin.org/cookies")
print(response.text)

Certificate verification
Requesting a site whose SSL certificate fails verification raises an SSL error:
import requests

response = requests.get("https://www.12306.cn")  # raises an SSL error
print(response.status_code)

Disabling SSL verification and suppressing the warning
import requests
import urllib3

# Suppress the InsecureRequestWarning
urllib3.disable_warnings()
# Skip SSL certificate verification
response = requests.get("https://www.12306.cn", verify=False)
print(response.status_code)

Verifying with a local certificate
import requests

# Supply a local client certificate
response = requests.get("https://www.12306.cn", cert=('/path/server.crt', '/path/key'))
print(response.status_code)

Setting a proxy
# _*_ coding: utf-8 _*_
import requests

# Route requests through a proxy
proxies = {
    "http": "http://120.25.253.234:8118"
}
response = requests.get("http://www.baidu.com", proxies=proxies)
print(response.status_code)

If the proxy requires a username and password:
# _*_ coding: utf-8 _*_
import requests

# Proxy with authentication embedded in the URL
proxies = {
    "http": "http://user:password@120.25.253.234:8118"
}
response = requests.get("http://www.baidu.com", proxies=proxies)
print(response.status_code)

Setting timeouts
# _*_ coding: utf-8 _*_
import requests
from requests.exceptions import ReadTimeout

# Abort if the request takes longer than one second
try:
    response = requests.get("http://www.baidu.com", timeout=1)
    print(response.status_code)
except ReadTimeout:
    print("TIME OUT")

Exception handling example
# _*_ coding: utf-8 _*_
import requests
from requests.exceptions import ReadTimeout, HTTPError, RequestException

# Catch the most specific exceptions first
try:
    response = requests.get("http://www.baidu.com", timeout=1)
    print(response.status_code)
except ReadTimeout:
    print("TIME OUT")
except HTTPError:
    print("HTTP error")
except RequestException:
    print("error")
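The ordering above works because the exception classes form a hierarchy: RequestException is the base class of the others, so catching it last makes it a catch-all. A quick offline check:

```python
from requests.exceptions import (ConnectTimeout, HTTPError, ReadTimeout,
                                 RequestException, Timeout)

# Both timeout flavours derive from Timeout, and everything
# derives from RequestException.
print(issubclass(ReadTimeout, Timeout))         # True
print(issubclass(ConnectTimeout, Timeout))      # True
print(issubclass(HTTPError, RequestException))  # True
print(issubclass(Timeout, RequestException))    # True
```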