


Logging Cookbook

Author

創(chuàng)新互聯(lián)公司專注為客戶提供全方位的互聯(lián)網(wǎng)綜合服務,包含不限于網(wǎng)站建設、成都網(wǎng)站設計、北流網(wǎng)絡推廣、微信平臺小程序開發(fā)、北流網(wǎng)絡營銷、北流企業(yè)策劃、北流品牌公關、搜索引擎seo、人物專訪、企業(yè)宣傳片、企業(yè)代運營等,從售前售中售后,我們都將竭誠為您服務,您的肯定,是我們最大的嘉獎;創(chuàng)新互聯(lián)公司為所有大學生創(chuàng)業(yè)者提供北流建站搭建服務,24小時服務熱線:18980820575,官方網(wǎng)址:www.cdcxhl.com

Vinay Sajip

This page contains a number of recipes related to logging, which have been found useful in the past. For links to tutorial and reference information, please see Other resources.

Using logging in multiple modules

Multiple calls to logging.getLogger('someLogger') return a reference to the same logger object. This is true not only within the same module, but also across modules as long as it is in the same Python interpreter process. It is true for references to the same object; additionally, application code can define and configure a parent logger in one module and create (but not configure) a child logger in a separate module, and all logger calls to the child will pass up to the parent. Here is a main module:

 
 
 
 
import logging
import auxiliary_module

# create logger with 'spam_application'
logger = logging.getLogger('spam_application')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
ch.setFormatter(formatter)
# add the handlers to the logger
logger.addHandler(fh)
logger.addHandler(ch)

logger.info('creating an instance of auxiliary_module.Auxiliary')
a = auxiliary_module.Auxiliary()
logger.info('created an instance of auxiliary_module.Auxiliary')
logger.info('calling auxiliary_module.Auxiliary.do_something')
a.do_something()
logger.info('finished auxiliary_module.Auxiliary.do_something')
logger.info('calling auxiliary_module.some_function()')
auxiliary_module.some_function()
logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module:

 
 
 
 
import logging

# create logger
module_logger = logging.getLogger('spam_application.auxiliary')

class Auxiliary:
    def __init__(self):
        self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
        self.logger.info('creating an instance of Auxiliary')

    def do_something(self):
        self.logger.info('doing something')
        a = 1 + 1
        self.logger.info('done doing something')

def some_function():
    module_logger.info('received a call to "some_function"')

The output looks like this:

 
 
 
 
2005-03-23 23:47:11,663 - spam_application - INFO - creating an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO - creating an instance of Auxiliary
2005-03-23 23:47:11,665 - spam_application - INFO - created an instance of auxiliary_module.Auxiliary
2005-03-23 23:47:11,668 - spam_application - INFO - calling auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO - doing something
2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO - done doing something
2005-03-23 23:47:11,670 - spam_application - INFO - finished auxiliary_module.Auxiliary.do_something
2005-03-23 23:47:11,671 - spam_application - INFO - calling auxiliary_module.some_function()
2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO - received a call to "some_function"
2005-03-23 23:47:11,673 - spam_application - INFO - done with auxiliary_module.some_function()
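The opening claim — that repeated getLogger() calls with the same name return the very same object, and that dotted names form a parent/child chain — can be checked directly. A minimal sketch, reusing the logger names from above:

```python
import logging

# getLogger returns the same object for the same name,
# no matter which module the call is made from.
a = logging.getLogger('spam_application')
b = logging.getLogger('spam_application')
print(a is b)  # True

# A dotted name creates a child whose records propagate to the parent.
child = logging.getLogger('spam_application.auxiliary')
print(child.parent is a)  # True
```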

Logging from multiple threads

Logging from multiple threads requires no special effort. The following example shows logging from the main (initial) thread and another thread:

 
 
 
 
import logging
import threading
import time

def worker(arg):
    while not arg['stop']:
        logging.debug('Hi from myfunc')
        time.sleep(0.5)

def main():
    logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
    info = {'stop': False}
    thread = threading.Thread(target=worker, args=(info,))
    thread.start()
    while True:
        try:
            logging.debug('Hello from main')
            time.sleep(0.75)
        except KeyboardInterrupt:
            info['stop'] = True
            break
    thread.join()

if __name__ == '__main__':
    main()

When run, the script should print something like the following:

 
 
 
 
     0 Thread-1 Hi from myfunc
     3 MainThread Hello from main
   505 Thread-1 Hi from myfunc
   755 MainThread Hello from main
  1007 Thread-1 Hi from myfunc
  1507 MainThread Hello from main
  1508 Thread-1 Hi from myfunc
  2010 Thread-1 Hi from myfunc
  2258 MainThread Hello from main
  2512 Thread-1 Hi from myfunc
  3009 MainThread Hello from main
  3013 Thread-1 Hi from myfunc
  3515 Thread-1 Hi from myfunc
  3761 MainThread Hello from main
  4017 Thread-1 Hi from myfunc
  4513 MainThread Hello from main
  4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach works for more threads than shown here, of course.

Multiple handlers and formatters

Loggers are plain Python objects. The addHandler() method has no minimum or maximum quota for the number of handlers you may add. Sometimes it will be beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. To set this up, simply configure the appropriate handlers; the logging calls in the application code can remain unchanged. Here is a slight modification to the previous simple module-based configuration example:

 
 
 
 
import logging

logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
# create file handler which logs even debug messages
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# create console handler with a higher log level
ch = logging.StreamHandler()
ch.setLevel(logging.ERROR)
# create formatter and add it to the handlers
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
fh.setFormatter(formatter)
# add the handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warn message')
logger.error('error message')
logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers. All that changed was the addition and configuration of a new handler named fh.

The ability to create new handlers with higher- or lower-severity filters can be very helpful when writing and testing an application. Instead of using many print statements for debugging, use logger.debug: unlike the print statements, which you will have to delete or comment out later, the logger.debug statements can remain intact in the source code and stay dormant until you need them again. At that time, the only change that needs to happen is to modify the severity level of the logger and/or handler.
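That workflow can be sketched with an in-memory stream (the logger and handler names here are illustrative):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger('toggle_demo')
logger.setLevel(logging.DEBUG)            # the logger itself passes everything
handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)            # the handler suppresses DEBUG for now
logger.addHandler(handler)

logger.debug('hidden while the handler level is INFO')

# To debug again, only the handler's level changes -- the
# logger.debug calls in the source stay exactly where they are.
handler.setLevel(logging.DEBUG)
logger.debug('now visible')

print(stream.getvalue())  # only "now visible" was emitted
```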

Logging to multiple destinations

Let's say you want to log to console and file with different message formats and in differing circumstances. Say you want to log messages with levels of DEBUG and higher to file, and those messages at level INFO and higher to the console. Let's also assume that the file should contain timestamps, but the console messages should not. Here's how you can achieve this:

 
 
 
 
import logging

# set up logging to file - see previous section for more details
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                    datefmt='%m-%d %H:%M',
                    filename='/tmp/myapp.log',
                    filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# set a format which is simpler for console use
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handler to use this format
console.setFormatter(formatter)
# add the handler to the root logger
logging.getLogger('').addHandler(console)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see

 
 
 
 
root        : INFO     Jackdaws love my big sphinx of quartz.
myapp.area1 : INFO     How quickly daft jumping zebras vex.
myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like

 
 
 
 
10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages are sent to both destinations.

This example uses console and file handlers, but you can use any number and combination of handlers you choose.

Note that the above choice of log filename /tmp/myapp.log implies use of a standard location for temporary files on POSIX systems. On Windows, you may need to choose a different directory name for the log - just ensure that the directory exists and that you have the permissions to create and update files in it.
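A portable way to pick such a location is the standard tempfile module; the filename myapp.log below is just the example name reused from above:

```python
import logging
import os
import tempfile

# gettempdir() returns /tmp on most POSIX systems and the user's
# temp directory on Windows, so the same code works on both.
log_path = os.path.join(tempfile.gettempdir(), 'myapp.log')
logging.basicConfig(filename=log_path, filemode='w', level=logging.DEBUG)
logging.debug('logging to %s', log_path)
```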

Custom handling of levels

Sometimes, you might want to do something slightly different from the standard handling of levels in handlers, where all levels above a threshold get processed by a handler. To do this, you need to use filters. Let’s look at a scenario where you want to arrange things as follows:

  • Send messages of severity INFO and WARNING to sys.stdout

  • Send messages of severity ERROR and above to sys.stderr

  • Send messages of severity DEBUG and above to file app.log

Suppose you configure logging with the following JSON:

 
 
 
 
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(levelname)-8s - %(message)s"
        }
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout"
        },
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "file": {
            "class": "logging.FileHandler",
            "formatter": "simple",
            "filename": "app.log",
            "mode": "w"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stderr",
            "stdout",
            "file"
        ]
    }
}

This configuration does almost what we want, except that sys.stdout would show messages of severity ERROR and above as well as INFO and WARNING messages. To prevent this, we can set up a filter which excludes those messages and add it to the relevant handler. This can be configured by adding a filters section parallel to formatters and handlers:

 
 
 
 
"filters": {
    "warnings_and_below": {
        "()" : "__main__.filter_maker",
        "level": "WARNING"
    }
}

and changing the section on the stdout handler to add it:

 
 
 
 
"stdout": {
    "class": "logging.StreamHandler",
    "level": "INFO",
    "formatter": "simple",
    "stream": "ext://sys.stdout",
    "filters": ["warnings_and_below"]
}

A filter is just a function, so we can define the filter_maker (a factory function) as follows:

 
 
 
 
def filter_maker(level):
    level = getattr(logging, level)
    def filter(record):
        return record.levelno <= level
    return filter

This converts the string argument passed in to a numeric level, and returns a function which only returns True if the level of the passed in record is at or below the specified level. Note that in this example I have defined the filter_maker in a test script main.py that I run from the command line, so its module will be __main__ - hence the __main__.filter_maker in the filter configuration. You will need to change that if you define it in a different module.
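Restating filter_maker, its behaviour can be exercised on synthetic records built with logging.makeLogRecord:

```python
import logging

def filter_maker(level):
    # Convert e.g. 'WARNING' to the numeric logging.WARNING.
    level = getattr(logging, level)
    def filter(record):
        return record.levelno <= level
    return filter

keep = filter_maker('WARNING')
info_rec = logging.makeLogRecord({'levelno': logging.INFO})
error_rec = logging.makeLogRecord({'levelno': logging.ERROR})
print(keep(info_rec))   # True  -- INFO is at or below WARNING
print(keep(error_rec))  # False -- ERROR is above WARNING
```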

With the filter added, we can run main.py, which in full is:

 
 
 
 
import json
import logging
import logging.config

CONFIG = '''
{
    "version": 1,
    "disable_existing_loggers": false,
    "formatters": {
        "simple": {
            "format": "%(levelname)-8s - %(message)s"
        }
    },
    "filters": {
        "warnings_and_below": {
            "()" : "__main__.filter_maker",
            "level": "WARNING"
        }
    },
    "handlers": {
        "stdout": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
            "stream": "ext://sys.stdout",
            "filters": ["warnings_and_below"]
        },
        "stderr": {
            "class": "logging.StreamHandler",
            "level": "ERROR",
            "formatter": "simple",
            "stream": "ext://sys.stderr"
        },
        "file": {
            "class": "logging.FileHandler",
            "formatter": "simple",
            "filename": "app.log",
            "mode": "w"
        }
    },
    "root": {
        "level": "DEBUG",
        "handlers": [
            "stderr",
            "stdout",
            "file"
        ]
    }
}
'''

def filter_maker(level):
    level = getattr(logging, level)
    def filter(record):
        return record.levelno <= level
    return filter

logging.config.dictConfig(json.loads(CONFIG))
logging.debug('A DEBUG message')
logging.info('An INFO message')
logging.warning('A WARNING message')
logging.error('An ERROR message')
logging.critical('A CRITICAL message')

And after running it like this:

 
 
 
 
python main.py 2>stderr.log >stdout.log

We can see the results are as expected:

 
 
 
 
$ more *.log
::::::::::::::
app.log
::::::::::::::
DEBUG    - A DEBUG message
INFO     - An INFO message
WARNING  - A WARNING message
ERROR    - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stderr.log
::::::::::::::
ERROR    - An ERROR message
CRITICAL - A CRITICAL message
::::::::::::::
stdout.log
::::::::::::::
INFO     - An INFO message
WARNING  - A WARNING message

Configuration server example

Here is an example of a module using the logging configuration server:

 
 
 
 
import logging
import logging.config
import time
import os

# read initial config file
logging.config.fileConfig('logging.conf')

# create and start listener on port 9999
t = logging.config.listen(9999)
t.start()

logger = logging.getLogger('simpleExample')

try:
    # loop through logging calls to see the difference
    # new configurations make, until Ctrl+C is pressed
    while True:
        logger.debug('debug message')
        logger.info('info message')
        logger.warning('warn message')
        logger.error('error message')
        logger.critical('critical message')
        time.sleep(5)
except KeyboardInterrupt:
    # cleanup
    logging.config.stopListening()
    t.join()

And here is a script that takes a filename and sends that file to the server, properly preceded with the binary-encoded length, as the new logging configuration:

 
 
 
 
#!/usr/bin/env python
import socket, sys, struct

with open(sys.argv[1], 'rb') as f:
    data_to_send = f.read()

HOST = 'localhost'
PORT = 9999
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print('connecting...')
s.connect((HOST, PORT))
print('sending config...')
s.send(struct.pack('>L', len(data_to_send)))
s.send(data_to_send)
s.close()
print('complete')
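The wire format used by the listener is simply a 4-byte big-endian length followed by the payload; the round trip can be checked without opening any sockets:

```python
import struct

payload = b'{"version": 1}'  # stands in for the config file's bytes

# Sender side: length prefix, then the data itself.
frame = struct.pack('>L', len(payload)) + payload

# Receiver side: read 4 bytes to learn the length, then that many bytes.
(length,) = struct.unpack('>L', frame[:4])
body = frame[4:4 + length]
print(length, body == payload)  # 14 True
```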

Dealing with handlers that block

Sometimes you have to get your logging handlers to do their work without blocking the thread you're logging from. This is common in web applications, though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the SMTPHandler: sending emails can take a long time, for a number of reasons outside the developer's control (for example, a poorly performing mail or network infrastructure). But almost any network-based handler can block: even a SocketHandler operation may do a DNS query under the hood which is too slow (and this query can be deep in the socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a QueueHandler to those loggers which are accessed from performance-critical threads. They simply write to their queue, which can be sized to a large enough capacity or initialized with no upper bound to their size. The write to the queue will typically be accepted quickly, though you will probably need to catch the queue.Full exception as a precaution in your code. If you are a library developer who has performance-critical threads in their code, be sure to document this (together with a suggestion to attach only QueueHandlers to your loggers) for the benefit of other developers who will use your code.

The second part of the solution is QueueListener, which has been designed as the counterpart to QueueHandler. A QueueListener is very simple: it's passed a queue and some handlers, and it fires up an internal thread which listens to its queue for LogRecords sent from QueueHandlers (or any other source of LogRecords, for that matter). The LogRecords are removed from the queue and passed to the handlers for processing.

The advantage of having a separate QueueListener class is that you can use the same instance to service multiple QueueHandlers. This is more resource-friendly than, say, having threaded versions of the existing handler classes, which would eat up one thread per handler for no particular benefit.

An example of the use of these two classes follows (imports included for completeness):

import logging
import queue
from logging.handlers import QueueHandler, QueueListener

que = queue.Queue(-1)  # no limit on size
queue_handler = QueueHandler(que)
handler = logging.StreamHandler()
listener = QueueListener(que, handler)
root = logging.getLogger()
root.addHandler(queue_handler)
formatter = logging.Formatter('%(threadName)s: %(message)s')
handler.setFormatter(formatter)
listener.start()
# The log output will display the thread which generated
# the event (the main thread) rather than the internal
# thread which monitors the internal queue. This is what
# you want to happen.
root.warning('Look out!')
listener.stop()

which, when run, will produce:

 
 
 
 
MainThread: Look out!

Note

Although the earlier discussion wasn’t specifically talking about async code, but rather about slow logging handlers, it should be noted that when logging from async code, network and even file handlers could lead to problems (blocking the event loop) because some logging is done from asyncio internals. It might be best, if any async code is used in an application, to use the above approach for logging, so that any blocking code runs only in the QueueListener thread.

Changed in version 3.5: Prior to Python 3.5, the QueueListener always passed every message received from the queue to every handler it was initialized with. (This was because it was assumed that level filtering was all done on the other side, where the queue is filled.) From 3.5 onwards, this behaviour can be changed by passing a keyword argument respect_handler_level=True to the listener's constructor. When this is done, the listener compares the level of each message with the handler's level, and only passes a message to a handler if it's appropriate to do so.
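A minimal sketch of respect_handler_level=True, using an in-memory stream to show that the listener now honours the handler's level (names here are illustrative):

```python
import io
import logging
import queue
from logging.handlers import QueueHandler, QueueListener

q = queue.Queue(-1)
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setLevel(logging.ERROR)              # this handler wants ERROR and above

listener = QueueListener(q, handler, respect_handler_level=True)
listener.start()

logger = logging.getLogger('rhl_demo')
logger.setLevel(logging.DEBUG)
logger.propagate = False                     # keep the demo output self-contained
logger.addHandler(QueueHandler(q))

logger.info('dropped: below the handler level')
logger.error('delivered')

listener.stop()                              # drains the queue before returning
print(stream.getvalue())
```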

Sending and receiving logging events across a network

Let's say you want to send logging events across a network, and handle them at the receiving end. A simple way of doing this is attaching a SocketHandler instance to the root logger at the sending end:

 
 
 
 
import logging, logging.handlers

rootLogger = logging.getLogger('')
rootLogger.setLevel(logging.DEBUG)
socketHandler = logging.handlers.SocketHandler('localhost',
                    logging.handlers.DEFAULT_TCP_LOGGING_PORT)
# don't bother with a formatter, since a socket handler sends the event as
# an unformatted pickle
rootLogger.addHandler(socketHandler)

# Now, we can log to the root logger, or any other logger. First the root...
logging.info('Jackdaws love my big sphinx of quartz.')

# Now, define a couple of other loggers which might represent areas in your
# application:
logger1 = logging.getLogger('myapp.area1')
logger2 = logging.getLogger('myapp.area2')

logger1.debug('Quick zephyrs blow, vexing daft Jim.')
logger1.info('How quickly daft jumping zebras vex.')
logger2.warning('Jail zesty vixen who grabbed pay from quack.')
logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the socketserver module. Here is a basic working example:

 
 
 
 
import pickle
import logging
import logging.handlers
import socketserver
import struct


class LogRecordStreamHandler(socketserver.StreamRequestHandler):
    """Handler for a streaming logging request.

    This basically logs the record using whatever logging policy is
    configured locally.
    """

    def handle(self):
        """
        Handle multiple requests - each expected to be a 4-byte length,
        followed by the LogRecord in pickle format. Logs the record
        according to whatever policy is configured locally.
        """
        while True:
            chunk = self.connection.recv(4)
            if len(chunk) < 4:
                break
            slen = struct.unpack('>L', chunk)[0]
            chunk = self.connection.recv(slen)
            while len(chunk) < slen:
                chunk = chunk + self.connection.recv(slen - len(chunk))
            obj = self.unPickle(chunk)
            record = logging.makeLogRecord(obj)
            self.handleLogRecord(record)

    def unPickle(self, data):
        return pickle.loads(data)

    def handleLogRecord(self, record):
        # if a name is specified, we use the named logger rather than the one
        # implied by the record.
        if self.server.logname is not None:
            name = self.server.logname
        else:
            name = record.name
        logger = logging.getLogger(name)
        # N.B. EVERY record gets logged. This is because Logger.handle
        # is normally called AFTER logger-level filtering. If you want
        # to do filtering, do it at the client end to save wasting
        # cycles and network bandwidth!
        logger.handle(record)


class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
    """
    Simple TCP socket-based logging receiver suitable for testing.
    """

    allow_reuse_address = True

    def __init__(self, host='localhost',
                 port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                 handler=LogRecordStreamHandler):
        socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
        self.abort = 0
        self.timeout = 1
        self.logname = None

    def serve_until_stopped(self):
        import select
        abort = 0
        while not abort:
            rd, wr, ex = select.select([self.socket.fileno()],
                                       [], [],
                                       self.timeout)
            if rd:
                self.handle_request()
            abort = self.abort


def main():
    logging.basicConfig(
        format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
    tcpserver = LogRecordSocketReceiver()
    print('About to start TCP server...')
    tcpserver.serve_until_stopped()


if __name__ == '__main__':
    main()