鍍金池 / Q&A
風畔 answered:

Pull in an entire framework just for a header and footer?

舊言 answered:

`!` means the property is guaranteed not to be nil, so when using it you don't need `?` or explicit unwrapping:
var table: UITableView!
If you declare it like this, you must initialize the table in viewDidLoad or in an initializer; otherwise accessing the property directly will crash.

不歸路 answered:
function tree($arr, $pid = 0){
    $tree = [];
    foreach($arr as $v){
        if($v['pid'] == $pid){
            $v['data'] = tree($arr, $v['id']); // recurse into this node's children
            $tree[] = $v;
        }
    }
    return $tree;
}
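The same flat-list-to-tree recursion, sketched in Python for comparison (the `id`/`pid`/`data` field names mirror the PHP above; the sample rows are made up):

```python
def tree(rows, pid=0):
    """Recursively group flat rows into a nested tree keyed on parent id."""
    branch = []
    for row in rows:
        if row['pid'] == pid:
            node = dict(row)  # copy so the input rows stay untouched
            node['data'] = tree(rows, row['id'])
            branch.append(node)
    return branch

rows = [
    {'id': 1, 'pid': 0, 'name': 'root'},
    {'id': 2, 'pid': 1, 'name': 'child'},
    {'id': 3, 'pid': 2, 'name': 'grandchild'},
]
result = tree(rows)
```

Each call scans the whole list, so it is O(n²) like the PHP version; fine for menus and category trees, but for large tables an index keyed by `pid` is the usual optimization.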
雨萌萌 answered:

The question is still a bit vague; more detail would help. Below is my own understanding.

I'm not sure whether I've understood you correctly:

  1. By "extracting style modules" you presumably mean splitting a module's styles out into a separate file.
  2. The extracted style file can then be pulled in with @import 'path/to/styles'.

But if you're asking how to extract style modules during a project build, in webpack you can use the
extract-text-webpack-plugin to do the extraction.

貓小柒 answered:

Try something like this (ng-checked takes an expression rather than statements, so an if block won't work there):

ng-checked="textCode !== 'test'"
涼薄 answered:

"I've already confirmed my encoding is utf8"

Well...

mb_convert_encoding($homepage, "UTF-8", "GBK")

converts from GBK to UTF-8. Since the output is correct after this conversion, the original encoding was definitely not UTF-8.

The attached screenshot (clipboard.png) showed the correct UTF-8 output.
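The equivalent of that mb_convert_encoding call, sketched in Python: decode the raw bytes as GBK, then re-encode as UTF-8. The sample text is made up for illustration.

```python
# Bytes of the text "中文" as stored in GBK (sample data for illustration)
gbk_bytes = u'\u4e2d\u6587'.encode('gbk')

# GBK -> UTF-8, the same direction as mb_convert_encoding($homepage, "UTF-8", "GBK")
utf8_bytes = gbk_bytes.decode('gbk').encode('utf-8')
```

As in the answer above: if the bytes decode cleanly as GBK and then display correctly once re-encoded, the source was GBK, not UTF-8.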

半心人 answered:

? character
Matches the preceding expression 0 or 1 times. Equivalent to {0,1}.

For example, /e?le?/ matches the 'el' in "angel", the 'le' in "angle", and the 'l' in "oslo".

If it is used immediately after any of the quantifiers *, +, ?, or {}, it makes that quantifier non-greedy (matching as few characters as possible), the opposite of the default greedy mode (matching as many characters as possible).

For example, applying /\d+/ to "123abc" matches "123", whereas /\d+?/ matches only "1".
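The same greedy-vs-lazy behaviour, demonstrated with Python's re module:

```python
import re

# Greedy vs. lazy, mirroring the /\d+/ vs /\d+?/ example above
assert re.match(r'\d+', "123abc").group() == "123"   # greedy: all the digits
assert re.match(r'\d+?', "123abc").group() == "1"    # lazy: as few as possible

# The /e?le?/ example: 'l' with an optional 'e' on either side
assert re.search(r'e?le?', "angel").group() == "el"
assert re.search(r'e?le?', "angle").group() == "le"
assert re.search(r'e?le?', "oslo").group() == "l"
```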

淺淺 answered:

Since you already have an export, why not use an import in the project?

Also, the export as written isn't quite right.

Or simply define a function in maintenance.js and include that file before all other scripts, so it can be called anywhere in the window scope.

So my suggestions:

  1. Either switch fully to export/import and let a bundler do the packaging.
  2. Or check the script order and fix the export syntax.
  3. Better still, keep the maintenance config in a JSON file: it's easy to request asynchronously and for the backend to hot-update (serving it as a js file can run into caching problems).

九年囚 answered:

The path where you import myScript.js is wrong. To say exactly how to fix it, you'd have to post the code.

萢萢糖 answered:

data.replace(/<p>&nbsp;<\/p>/g, '&nbsp;')

This replaces <p>&nbsp;</p> with &nbsp;. That's the idea; write the exact regex yourself. Note that the / in the closing tag must be escaped as \/.

練命 answered:

You only need to close the outermost stream; it will recursively close the streams it wraps.
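The answer is about Java's wrapped streams, but Python's io wrappers behave the same way, which makes the point easy to demonstrate: closing the outer wrapper closes the object it wraps.

```python
import io

raw = io.BytesIO(b"hello")         # innermost "stream"
buffered = io.BufferedReader(raw)  # wrapper layered on top of it

buffered.close()                   # close only the outer layer
# the wrapped inner stream is now closed as well
```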

糖果果 answered:

I take the "cache" you mention to be the browser's caching of resource files.
Caching is configured on Nginx or on the backend server: cache lifetime, expiry time, and so on. If needed, any resource fetched with a GET request can in principle be cached; whether it takes effect also depends on browser support. I'd suggest searching for articles on HTTP caching.

柒喵 answered:

id is the primary key, so selecting only id hits the unique primary-key index; besides, selecting one column is bound to differ in speed from selecting many.

I ran into this problem too; for now I copy it into an Alipay friend chat window to run it.

落殤 answered:

Since it's the same data, why not UPDATE the previously soft-deleted row when adding, instead of inserting again?

不討囍 answered:

This is from CSDN; I'm copying it straight over:

I needed to build "viewpoints", whose rooms are similar to Zhihu topics, so I had to find a way to crawl them. After a lot of fiddling I finally got it working. The code is Python; if you don't understand it, please go learn it yourself! If you do, just read the code; it definitely works.


#coding:utf-8
"""
@author:haoning
@create time:2015.8.5
"""
from __future__ import division  # exact division
from Queue import Queue
import json
import os
import re
import platform
import uuid
import urllib
import urllib2
import sys
import time
import MySQLdb as mdb
from bs4 import BeautifulSoup


reload(sys)
sys.setdefaultencoding( "utf-8" )


headers = {
   'User-Agent' : 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:35.0) Gecko/20100101 Firefox/35.0',
   'Content-Type':'application/x-www-form-urlencoded; charset=UTF-8',
   'X-Requested-With':'XMLHttpRequest',
   'Referer':'https://www.zhihu.com/topics',
   'Cookie':'__utma=51854390.517069884.1416212035.1416212035.1416212035.1; q_c1=c02bf44d00d240798bfabcfc95baeb56|1455778173000|1416205243000; _za=b1c8ae35-f986-46a2-b24a-cb9359dc6b2a; aliyungf_tc=AQAAAJ1m71jL1woArKqF22VFnL/wRy6C; _xsrf=9d494558f9271340ab24598d85b2a3c8; cap_id="MDNiMjcwM2U0MTRhNDVmYjgxZWVhOWI0NTA2OGU5OTg=|1455864276|2a4ce8247ebd3c0df5393bb5661713ad9eec01dd"; n_c=1; _alicdn_sec=56c6ba4d556557d27a0f8c876f563d12a285f33a'
}


DB_HOST = '127.0.0.1'
DB_USER = 'root'
DB_PASS = 'root'


queue= Queue() # receiving queue
nodeSet=set()
keywordSet=set()
stop=0
offset=-20
level=0
maxLevel=7
counter=0
base=""


conn = mdb.connect(DB_HOST, DB_USER, DB_PASS, 'zhihu', charset='utf8')
conn.autocommit(False)
curr = conn.cursor()


def get_html(url):
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req,None,3) # a proxy should be added here
        html = response.read()
        return html
    except:
        pass
    return None


def getTopics():
    url = 'https://www.zhihu.com/topics'
    print url
    try:
        req = urllib2.Request(url)
        response = urllib2.urlopen(req) # a proxy should be added here
        html = response.read().decode('utf-8')
        print html
        soup = BeautifulSoup(html)
        lis = soup.find_all('li', {'class' : 'zm-topic-cat-item'})
        
        for li in lis:
            data_id=li.get('data-id')
            name=li.text
            curr.execute('select id from classify_new where name=%s',(name,))
            y= curr.fetchone()
            if not y:
                curr.execute('INSERT INTO classify_new(data_id,name)VALUES(%s,%s)',(data_id,name))
        conn.commit()
    except Exception as e:
        print "get topic error",e
        


def get_extension(name):  
    where=name.rfind('.')
    if where!=-1:
        return name[where:len(name)]
    return None




def which_platform():
    sys_str = platform.system()
    return sys_str


def GetDateString():
    when=time.strftime('%Y-%m-%d',time.localtime(time.time()))
    foldername = str(when)
    return foldername 


def makeDateFolder(par,classify):
    try:
        if os.path.isdir(par):
            newFolderName=par + '//' + GetDateString() + '//'  +str(classify)
            if which_platform()=="Linux":
                newFolderName=par + '/' + GetDateString() + "/" +str(classify)
            if not os.path.isdir( newFolderName ):
                os.makedirs( newFolderName )
            return newFolderName
        else:
            return None 
    except Exception,e:
        print "kk",e
    return None 


def download_img(url,classify):
    try:
        extention=get_extension(url)
        if(extention is None):
            return None
        req = urllib2.Request(url)
        resp = urllib2.urlopen(req,None,3)
        dataimg=resp.read()
        name=str(uuid.uuid1()).replace("-","")+"_www.guandn.com"+extention
        top="E://topic_pic"
        folder=makeDateFolder(top, classify)
        filename=None
        if folder is not None:
            filename = folder + "/" + name
        try:
            if "e82bab09c_m" in str(url):
                return True
            if not os.path.exists(filename):
                file_object = open(filename,'w+b')
                file_object.write(dataimg)
                file_object.close()
                return '/room/default/'+GetDateString()+'/'+str(classify)+"/"+name
            else:
                print "file exist"
                return None
        except IOError,e1:
            print "e1=",e1
            pass
    except Exception as e:
        print "eee",e
        pass
    return None # if the download failed, fall back to the original site's link


def getChildren(node,name):
    global queue,nodeSet
    try:
        url="https://www.zhihu.com/topic/"+str(node)+"/hot"
        html=get_html(url)
        if html is None:
            return
        soup = BeautifulSoup(html)
        p_ch='父話題'
        node_name=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
        topic_cla=soup.find('div', {'class' : 'child-topic'})
        if topic_cla is not None:
            try:
                p_ch=str(topic_cla.text)
                aList = soup.find_all('a', {'class' : 'zm-item-tag'}) #獲取所有子節(jié)點(diǎn)
                if u'子話(huà)題' in p_ch:
                    for a in aList:
                        token=a.get('data-token')
                        a=str(a).replace('\n','').replace('\t','').replace('\r','')
                        start=str(a).find('>')
                        end=str(a).rfind('</a>')
                        new_node=str(str(a)[start+1:end])
                        curr.execute('select id from rooms where name=%s',(new_node,)) # ensure the name is unique first
                        y= curr.fetchone()
                        if not y:
                            print "y=",y,"new_node=",new_node,"token=",token
                            queue.put((token,new_node,node_name))
            except Exception as e:
                print "add queue error",e
    except Exception as e:
        print "get html error",e
        
    


def getContent(n,name,p,top_id):
    try:
        global counter
        curr.execute('select id from rooms where name=%s',(name,)) # ensure the name is unique first
        y= curr.fetchone()
        print "exist?? ",y,"n=",n
        if not y:
            url="https://www.zhihu.com/topic/"+str(n)+"/hot"
            html=get_html(url)
            if html is None:
                return
            soup = BeautifulSoup(html)
            title=soup.find('div', {'id' : 'zh-topic-title'}).find('h1').text
            pic_path=soup.find('a',{'id':'zh-avartar-edit-form'}).find('img').get('src')
            description=soup.find('div',{'class':'zm-editable-content'})
            if description is not None:
                description=description.text
                
            if (u"未歸類" in title or u"根話題" in title): # allow these in, to avoid an infinite loop
                description=None
                
            tag_path=download_img(pic_path,top_id)
            print "tag_path=",tag_path
            if (tag_path is not None) or tag_path==True:
                if tag_path==True:
                    tag_path=None
                father_id=2 # defaults to the "misc" category
                curr.execute('select id from rooms where name=%s',(p,))
                results = curr.fetchall()
                for r in results:
                    father_id=r[0]
                name=title
                curr.execute('select id from rooms where name=%s',(name,)) # ensure the name is unique first
                y= curr.fetchone()
                print "store see..",y
                if not y:
                    friends_num=0
                    temp = time.time()
                    x = time.localtime(float(temp))
                    create_time = time.strftime("%Y-%m-%d %H:%M:%S",x) # current time
                    creater_id=None
                    room_avatar=tag_path
                    is_pass=1
                    has_index=0
                    reason_id=None  
                    #print father_id,name,friends_num,create_time,creater_id,room_avatar,is_pass,has_index,reason_id
                    ###################### content qualified for insertion
                    counter=counter+1
                    curr.execute("INSERT INTO rooms(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id)VALUES(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)",(father_id,name,friends_num,description,create_time,creater_id,room_avatar,is_pass,has_index,reason_id))
                    conn.commit() # must commit right away, otherwise the parent node cannot be found
                    if counter % 200==0:
                        print "current node",name,"num",counter
    except Exception as e:
        print "get content error",e       


def work():
    global queue
    curr.execute('select id,node,parent,name from classify where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        node=r[1]
        parent=r[2]
        name=r[3]
        try:
            queue.put((node,name,parent)) # enqueue the seed first
            while queue.qsize() >0:
                n,name,p=queue.get() # dequeue the head node
                getContent(n,name,p,top_id)
                getChildren(n,name) # enqueue the dequeued node's children
            conn.commit()
        except Exception as e:
            print "what's wrong",e  
            
def new_work():
    global queue
    curr.execute('select id,data_id,name from classify_new_copy where status=1')
    results = curr.fetchall()
    for r in results:
        top_id=r[0]
        data_id=r[1]
        name=r[2]
        try:
            get_topis(data_id,name,top_id)
        except:
            pass




def get_topis(data_id,name,top_id):
    global queue
    url = 'https://www.zhihu.com/node/TopicsPlazzaListV2'
    isGet = True
    offset = -20
    data_id=str(data_id)
    while isGet:
        offset = offset + 20
        values = {'method': 'next', 'params': '{"topic_id":'+data_id+',"offset":'+str(offset)+',"hash_id":""}'}
        try:
            msg=None
            try:
                data = urllib.urlencode(values)
                request = urllib2.Request(url,data,headers)
                response = urllib2.urlopen(request,None,5)
                html=response.read().decode('utf-8')
                json_str = json.loads(html)
                ms=json_str['msg']
                if len(ms) <5:
                    break
                msg=ms[0]
            except Exception as e:
                print "eeeee",e
            #print msg
            if msg is not None:
                soup = BeautifulSoup(str(msg))
                blks = soup.find_all('div', {'class' : 'blk'})
                for blk in blks:
                    page=blk.find('a').get('href')
                    if page is not None:
                        node=page.replace("/topic/","") # put more seeds into the DB
                        parent=name
                        ne=blk.find('strong').text
                        try:
                            queue.put((node,ne,parent)) # enqueue the seed first
                            while queue.qsize() >0:
                                n,name,p=queue.get() # dequeue the head node
                                size=queue.qsize()
                                if size > 0:
                                    print size
                                getContent(n,name,p,top_id)
                                getChildren(n,name) # enqueue the dequeued node's children
                            conn.commit()
                        except Exception as e:
                            print "what's wrong",e  
        except urllib2.URLError, e:
            print "error is",e
            pass 
            
        
if __name__ == '__main__':
    i=0
    while i<400:
        new_work()
        i=i+1

About the database: I won't attach a dump here. The fields are all in the code, so build the tables yourself; it really is that simple. I used MySQL; design the schema around your own needs.

If anything is unclear, come find me on 去轉盤網 (I develop that site too); the current QQ group number is kept up to date there. I'm not leaving a QQ number here, to avoid being banned by the system.

解夏 answered:

Same question here; I ran into this problem too!

喜歡你 answered:

This is mainly caused by the front end and back end using inconsistent Content-Type settings.
Set it in the third parameter:
headers: new HttpHeaders({'Content-Type': 'application/x-www-form-urlencoded'})
That should solve the problem.
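For context, application/x-www-form-urlencoded means the body is key=value pairs joined by &, not JSON. A quick Python sketch of the difference (the field names are hypothetical):

```python
import json
from urllib.parse import urlencode

payload = {'user': 'alice', 'age': 30}  # hypothetical form fields

# What an x-www-form-urlencoded endpoint expects as the body
form_body = urlencode(sorted(payload.items()))

# What the request was likely sending before the header fix
json_body = json.dumps(payload)
```

If the server parses the body as form data but receives a JSON string (or vice versa), the fields come back empty, which is the usual symptom behind this kind of question.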

鐧簞噯 answered:

Take a look at how Baidu Translate handles word selection. Doing a whole paragraph directly seems hard; you'd need to segment it into words (or even characters) and wrap the pieces in nested tags.
http://fanyi.baidu.com/?aldty...