cs231n Assignment 2: Fully-Connected Neural Nets

Tags: cs231n

fc_net.py

from builtins import range
from builtins import object
import numpy as np

from cs231n.layers import *
from cs231n.layer_utils import *

class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network with ReLU nonlinearity and
    softmax loss that uses a modular layer design. We assume an input dimension
    of D, a hidden dimension of H, and perform classification over C classes.

    The architecture should be affine - relu - affine - softmax.

    Note that this class does not implement gradient descent; instead, it
    will interact with a separate Solver object that is responsible for running
    optimization.

    The learnable parameters of the model are stored in the dictionary
    self.params that maps parameter names to numpy arrays.
    """

    def __init__(self, input_dim=3*32*32, hidden_dim=100, num_classes=10,
                 weight_scale=1e-3, reg=0.0):
        """
        Initialize a new network.

        Inputs:
        - input_dim: An integer giving the size of the input
        - hidden_dim: An integer giving the size of the hidden layer
        - num_classes: An integer giving the number of classes to classify
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - reg: Scalar giving L2 regularization strength.
        """
        self.params = {}
        self.reg = reg

        ############################################################################
        # TODO: Initialize the weights and biases of the two-layer net. Weights    #
        # should be initialized from a Gaussian centered at 0.0 with               #
        # standard deviation equal to weight_scale, and biases should be           #
        # initialized to zero. All weights and biases should be stored in the      #
        # dictionary self.params, with first layer weights                         #
        # and biases using the keys 'W1' and 'b1' and second layer                 #
        # weights and biases using the keys 'W2' and 'b2'.                         #
        ############################################################################
        # Weights: Gaussian with mean 0 and std weight_scale; biases: zeros.
        mu = 0
        sigma = weight_scale
        self.params['W1'] = mu + sigma * np.random.randn(input_dim,hidden_dim)

        self.params['b1'] = np.zeros(hidden_dim)

        self.params['W2'] = mu + sigma * np.random.randn(hidden_dim,num_classes)
        self.params['b2'] = np.zeros(num_classes)

        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################


    def loss(self, X, y=None):
        """
        Compute loss and gradient for a minibatch of data.

        Inputs:
        - X: Array of input data of shape (N, d_1, ..., d_k)
        - y: Array of labels, of shape (N,). y[i] gives the label for X[i].

        Returns:
        If y is None, then run a test-time forward pass of the model and return:
        - scores: Array of shape (N, C) giving classification scores, where
          scores[i, c] is the classification score for X[i] and class c.

        If y is not None, then run a training-time forward and backward pass and
        return a tuple of:
        - loss: Scalar value giving the loss
        - grads: Dictionary with the same keys as self.params, mapping parameter
          names to gradients of the loss with respect to those parameters.
        """
        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the two-layer net, computing the    #
        # class scores for X and storing them in the scores variable.              #
        ############################################################################
        N = X.shape[0]
        D = np.prod(X.shape[1:])
        x_in = X.reshape(N, D)
        fc1 = x_in.dot(self.params['W1']) + self.params['b1']   # first affine layer
        relu = np.maximum(0, fc1)                                # ReLU nonlinearity
        fc2 = relu.dot(self.params['W2']) + self.params['b2']   # second affine layer
        scores = fc2
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        # If y is None then we are in test mode so just return scores
        if y is None:
            return scores

        loss, grads = 0, {}
        ############################################################################
        # TODO: Implement the backward pass for the two-layer net. Store the loss  #
        # in the loss variable and gradients in the grads dictionary. Compute data #
        # loss using softmax, and make sure that grads[k] holds the gradients for  #
        # self.params[k]. Don't forget to add L2 regularization!                   #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################

        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']

        loss, dout2 = softmax_loss(scores, y)
        loss += 0.5 * self.reg * (np.sum(W1*W1)+np.sum(W2*W2))

        # Second affine layer: its input was the ReLU output.
        cache2 = relu, W2, b2
        dx2, dw2, db2 = affine_backward(dout2, cache2)
        grads['W2'] = dw2 + self.reg * W2  # remember the W2 term contributed by the L2 regularization in the loss
        grads['b2'] = db2

        # ReLU: its cache is the pre-activation fc1.
        dout1 = relu_backward(dx2, fc1)

        # First affine layer: affine_backward reshapes X internally.
        cache1 = X, W1, b1
        dx1, dw1, db1 = affine_backward(dout1, cache1)
        grads['W1'] = dw1 + self.reg * W1
        grads['b1'] = db1
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        return loss, grads
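Before training, it is worth gradient-checking the two-layer net numerically. The snippet below is not part of fc_net.py; it is a minimal sanity-check sketch meant to be run from a notebook at the assignment root, assuming the standard assignment layout (cs231n/classifiers/fc_net.py and cs231n/gradient_check.py).

import numpy as np
from cs231n.classifiers.fc_net import TwoLayerNet
from cs231n.gradient_check import eval_numerical_gradient

np.random.seed(231)
N, D, H, C = 3, 5, 50, 7          # tiny problem so the check runs fast
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, reg=0.1)
loss, grads = model.loss(X, y)

for name in sorted(grads):
    # numeric gradient of the scalar loss w.r.t. each parameter array
    f = lambda _: model.loss(X, y)[0]
    grad_num = eval_numerical_gradient(f, model.params[name], verbose=False)
    rel_err = np.max(np.abs(grad_num - grads[name]) /
                     np.maximum(1e-8, np.abs(grad_num) + np.abs(grads[name])))
    print('%s relative error: %.2e' % (name, rel_err))

Relative errors around 1e-7 or smaller indicate the analytic gradients above are correct.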




class FullyConnectedNet(object):
    """
    A fully-connected neural network with an arbitrary number of hidden layers,
    ReLU nonlinearities, and a softmax loss function. This will also implement
    dropout and batch/layer normalization as options. For a network with L layers,
    the architecture will be

    {affine - [batch/layer norm] - relu - [dropout]} x (L - 1) - affine - softmax

    where batch/layer normalization and dropout are optional, and the {...} block is
    repeated L - 1 times.

    Similar to the TwoLayerNet above, learnable parameters are stored in the
    self.params dictionary and will be learned using the Solver class.
    """

    def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                 dropout=1, normalization=None, reg=0.0,
                 weight_scale=1e-2, dtype=np.float32, seed=None):
        """
        Initialize a new FullyConnectedNet.

        Inputs:
        - hidden_dims: A list of integers giving the size of each hidden layer.
        - input_dim: An integer giving the size of the input.
        - num_classes: An integer giving the number of classes to classify.
        - dropout: Scalar between 0 and 1 giving dropout strength. If dropout=1 then
          the network should not use dropout at all.
        - normalization: What type of normalization the network should use. Valid values
          are "batchnorm", "layernorm", or None for no normalization (the default).
        - reg: Scalar giving L2 regularization strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - dtype: A numpy datatype object; all computations will be performed using
          this datatype. float32 is faster but less accurate, so you should use
          float64 for numeric gradient checking.
        - seed: If not None, then pass this random seed to the dropout layers. This
          will make the dropout layers deterministic so we can gradient check the
          model.
        """
        self.normalization = normalization
        self.use_dropout = dropout != 1
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}

        ############################################################################
        # TODO: Initialize the parameters of the network, storing all values in    #
        # the self.params dictionary. Store weights and biases for the first layer #
        # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
        # initialized from a normal distribution centered at 0 with standard       #
        # deviation equal to weight_scale. Biases should be initialized to zero.   #
        #                                                                          #
        # When using batch normalization, store scale and shift parameters for the #
        # first layer in gamma1 and beta1; for the second layer use gamma2 and     #
        # beta2, etc. Scale parameters should be initialized to ones and shift     #
        # parameters should be initialized to zeros.                               #
        ############################################################################
        parameters = [input_dim] + hidden_dims + [num_classes]
        lx = len(parameters)
        for i in range(1,lx):
            idw = 'W' + str(i)
            idb = 'b' + str(i)

            self.params[idw] = np.random.randn(parameters[i-1], parameters[i]) * weight_scale
            self.params[idb] = np.zeros(parameters[i])
        # Note: this solution only wires up gamma/beta (and the corresponding
        # forward/backward branches below) for batchnorm; layernorm is not handled.
        if self.normalization=='batchnorm':
            for i in range(1,lx-1):
                idga = 'gamma' + str(i)
                idbe = 'beta' + str(i)
                self.params[idga] = np.ones(parameters[i])
                self.params[idbe] = np.zeros(parameters[i])


        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        # When using dropout we need to pass a dropout_param dictionary to each
        # dropout layer so that the layer knows the dropout probability and the mode
        # (train / test). You can pass the same dropout_param to each dropout layer.
        self.dropout_param = {}
        if self.use_dropout:
            self.dropout_param = {'mode': 'train', 'p': dropout}
            if seed is not None:
                self.dropout_param['seed'] = seed

        # With batch normalization we need to keep track of running means and
        # variances, so we need to pass a special bn_param object to each batch
        # normalization layer. You should pass self.bn_params[0] to the forward pass
        # of the first batch normalization layer, self.bn_params[1] to the forward
        # pass of the second batch normalization layer, etc.
        self.bn_params = []
        if self.normalization=='batchnorm':
            self.bn_params = [{'mode': 'train'} for i in range(self.num_layers - 1)]
        if self.normalization=='layernorm':
            self.bn_params = [{} for i in range(self.num_layers - 1)]

        # Cast all parameters to the correct datatype
        for k, v in self.params.items():
            self.params[k] = v.astype(dtype)


    def loss(self, X, y=None):
        """
        Compute loss and gradient for the fully-connected net.

        Input / output: Same as TwoLayerNet above.
        """
        X = X.astype(self.dtype)
        mode = 'test' if y is None else 'train'

        # Set train/test mode for batchnorm params and dropout param since they
        # behave differently during training and testing.
        if self.use_dropout:
            self.dropout_param['mode'] = mode
        if self.normalization=='batchnorm':
            for bn_param in self.bn_params:
                bn_param['mode'] = mode
        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the fully-connected net, computing  #
        # the class scores for X and storing them in the scores variable.          #
        #                                                                          #
        # When using dropout, you'll need to pass self.dropout_param to each       #
        # dropout forward pass.                                                    #
        #                                                                          #
        # When using batch normalization, you'll need to pass self.bn_params[0] to #
        # the forward pass for the first batch normalization layer, pass           #
        # self.bn_params[1] to the forward pass for the second batch normalization #
        # layer, etc.                                                              #
        ############################################################################
        N, D = X.shape[0], np.prod(X.shape[1:])
        scores = X.reshape(N,D)
        list_x_in = {}
        list_x_in['x0'] = scores
        for i in range(1,self.num_layers):
            xa, wa, ba = scores, self.params['W' + str(i)], self.params['b' + str(i)]
            if self.normalization == 'batchnorm' and self.use_dropout:
                gamma, beta = self.params['gamma' + str(i)], self.params['beta' + str(i)]
                scores, cache = affine_bn_relu_drop_forward(xa, wa, ba, gamma, beta, self.bn_params[i - 1],self.dropout_param)
            elif self.normalization == 'batchnorm':
                gamma, beta = self.params['gamma' + str(i)], self.params['beta' + str(i)]
                scores, cache = affine_bn_relu_forward(xa, wa, ba, gamma, beta, self.bn_params[i-1])
            elif self.use_dropout:
                scores, cache = affine_relu_drop_forward(xa, wa, ba, self.dropout_param)
            else:
                scores, cache = affine_relu_forward(xa, wa, ba)
            list_x_in['x'+str(i)] = cache
        # Note: the last layer is a plain affine layer with no ReLU.
        xa, wa, ba = scores, self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)]
        scores, cache = affine_forward(xa, wa, ba)
        list_x_in['x' + str(self.num_layers)] = cache
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################


        # If test mode return early
        if mode == 'test':
            return scores

        loss, grads = 0.0, {}
        ############################################################################
        # TODO: Implement the backward pass for the fully-connected net. Store the #
        # loss in the loss variable and gradients in the grads dictionary. Compute #
        # data loss using softmax, and make sure that grads[k] holds the gradients #
        # for self.params[k]. Don't forget to add L2 regularization!               #
        #                                                                          #
        # When using batch/layer normalization, you don't need to regularize the scale   #
        # and shift parameters.                                                    #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################

        loss, dout = softmax_loss(scores, y)
        for i in range(self.num_layers):
            loss = loss + 0.5*self.reg*np.sum(np.square(self.params['W'+str(i+1)]))

        cache = list_x_in['x' + str(self.num_layers)]
        dout, dw, db = affine_backward(dout, cache)
        grads['W' + str(self.num_layers)], grads['b' + str(self.num_layers)] = dw + self.reg * self.params['W' + str(self.num_layers)], db

        for i in range(self.num_layers - 1, 0, -1):
            cache = list_x_in['x' + str(i)]

            if self.normalization == 'batchnorm' and self.use_dropout:
                dout, dw, db, dgamma, dbeta = affine_bn_relu_drop_backward(dout, cache)
                grads['gamma' + str(i)], grads['beta' + str(i)] = dgamma, dbeta
            elif self.normalization == 'batchnorm':
                dout, dw, db, dgamma, dbeta = affine_bn_relu_backward(dout, cache)
                grads['gamma' + str(i)], grads['beta' + str(i)] = dgamma, dbeta
            elif self.use_dropout:
                dout, dw, db = affine_relu_drop_backward(dout, cache)
            else:
                dout, dw, db = affine_relu_backward(dout, cache)
            grads['W' + str(i)], grads['b' + str(i)] = dw + self.reg * self.params['W' + str(i)], db

        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        return loss, grads
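The forward and backward passes above call several composite helpers that the post never shows: affine_bn_relu_forward/backward, affine_relu_drop_forward/backward and affine_bn_relu_drop_forward/backward. They are not part of the stock cs231n/layer_utils.py, so they presumably live there next to affine_relu_forward. The sketch below is one possible implementation (not the author's original code), assuming only the standard layers.py primitives: affine_forward/backward, batchnorm_forward/backward, relu_forward/backward and dropout_forward/backward.

from cs231n.layers import *

def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    # affine -> batchnorm -> relu
    a, fc_cache = affine_forward(x, w, b)
    bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(bn)
    return out, (fc_cache, bn_cache, relu_cache)

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    dbn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(dbn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta

def affine_relu_drop_forward(x, w, b, dropout_param):
    # affine -> relu -> dropout
    a, fc_cache = affine_forward(x, w, b)
    r, relu_cache = relu_forward(a)
    out, drop_cache = dropout_forward(r, dropout_param)
    return out, (fc_cache, relu_cache, drop_cache)

def affine_relu_drop_backward(dout, cache):
    fc_cache, relu_cache, drop_cache = cache
    dr = dropout_backward(dout, drop_cache)
    da = relu_backward(dr, relu_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db

def affine_bn_relu_drop_forward(x, w, b, gamma, beta, bn_param, dropout_param):
    # affine -> batchnorm -> relu -> dropout
    a, abr_cache = affine_bn_relu_forward(x, w, b, gamma, beta, bn_param)
    out, drop_cache = dropout_forward(a, dropout_param)
    return out, (abr_cache, drop_cache)

def affine_bn_relu_drop_backward(dout, cache):
    abr_cache, drop_cache = cache
    da = dropout_backward(dout, drop_cache)
    return affine_bn_relu_backward(da, abr_cache)

The caches returned by each forward helper are exactly what the matching backward helper unpacks, which is what the FullyConnectedNet code above relies on.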

optim.py

import numpy as np

"""
This file implements various first-order update rules that are commonly used
for training neural networks. Each update rule accepts current weights and the
gradient of the loss with respect to those weights and produces the next set of
weights. Each update rule has the same interface:

def update(w, dw, config=None):

Inputs:
  - w: A numpy array giving the current weights.
  - dw: A numpy array of the same shape as w giving the gradient of the
    loss with respect to w.
  - config: A dictionary containing hyperparameter values such as learning
    rate, momentum, etc. If the update rule requires caching values over many
    iterations, then config will also hold these cached values.

Returns:
  - next_w: The next point after the update.
  - config: The config dictionary to be passed to the next iteration of the
    update rule.

NOTE: For most update rules, the default learning rate will probably not
perform well; however the default values of the other hyperparameters should
work well for a variety of different problems.

For efficiency, update rules may perform in-place updates, mutating w and
setting next_w equal to w.
"""


def sgd(w, dw, config=None):
    """
    Performs vanilla stochastic gradient descent.

    config format:
    - learning_rate: Scalar learning rate.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)

    w -= config['learning_rate'] * dw
    return w, config


def sgd_momentum(w, dw, config=None):
    """
    Performs stochastic gradient descent with momentum.

    config format:
    - learning_rate: Scalar learning rate.
    - momentum: Scalar between 0 and 1 giving the momentum value.
      Setting momentum = 0 reduces to sgd.
    - velocity: A numpy array of the same shape as w and dw used to store a
      moving average of the gradients.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    next_w = None
    ###########################################################################
    # TODO: Implement the momentum update formula. Store the updated value in #
    # the next_w variable. You should also use and update the velocity v.     #
    ###########################################################################
    v = config['momentum'] * v - config['learning_rate']*dw
    next_w = w + v
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    config['velocity'] = v

    return next_w, config



def rmsprop(w, dw, config=None):
    """
    Uses the RMSProp update rule, which uses a moving average of squared
    gradient values to set adaptive per-parameter learning rates.

    config format:
    - learning_rate: Scalar learning rate.
    - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
      gradient cache.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - cache: Moving average of second moments of gradients.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    next_w = None
    ###########################################################################
    # TODO: Implement the RMSprop update formula, storing the next value of w #
    # in the next_w variable. Don't forget to update cache value stored in    #
    # config['cache'].                                                        #
    ###########################################################################
    cache = config['decay_rate']*config['cache']+(1-config['decay_rate'])*np.square(dw)
    next_w = w - config['learning_rate'] * dw / np.sqrt(cache+config['epsilon'])
    config['cache'] = cache
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################

    return next_w, config


def adam(w, dw, config=None):
    """
    Uses the Adam update rule, which incorporates moving averages of both the
    gradient and its square and a bias correction term.

    config format:
    - learning_rate: Scalar learning rate.
    - beta1: Decay rate for moving average of first moment of gradient.
    - beta2: Decay rate for moving average of second moment of gradient.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - m: Moving average of gradient.
    - v: Moving average of squared gradient.
    - t: Iteration number.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)

    next_w = None
    ###########################################################################
    # TODO: Implement the Adam update formula, storing the next value of w in #
    # the next_w variable. Don't forget to update the m, v, and t variables   #
    # stored in config.                                                       #
    #                                                                         #
    # NOTE: In order to match the reference output, please modify t _before_  #
    # using it in any calculations.                                           #
    ###########################################################################
    config['t'] = config['t'] + 1
    first_moment = config['beta1']*config['m'] + (1-config['beta1'])*dw
    second_moment = config['beta2']*config['v'] + (1-config['beta2'])*dw*dw
    first_moment_bias = first_moment / (1 - config['beta1']**config['t'])
    second_moment_bias = second_moment / (1 - config['beta2']**config['t'])
    next_w = w - config['learning_rate'] * first_moment_bias / (np.sqrt(second_moment_bias)+config['epsilon'])
    config['m'] = first_moment
    config['v'] = second_moment
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################

    return next_w, config
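As a quick, self-contained way to see the four update rules behave, the toy loop below minimizes a simple quadratic with each of them. It assumes the file above is saved as cs231n/optim.py (the filename the assignment uses), so the only real dependency is the functions just defined.

import numpy as np
from cs231n.optim import sgd, sgd_momentum, rmsprop, adam

# Minimize f(w) = 0.5 * ||w - w_star||^2, whose gradient is simply (w - w_star).
w_star = np.array([1.0, -2.0, 3.0])

for update_rule in (sgd, sgd_momentum, rmsprop, adam):
    w = np.zeros_like(w_star)
    config = {'learning_rate': 1e-2}    # same step size for every rule
    for _ in range(1000):
        dw = w - w_star                 # analytic gradient of the quadratic
        w, config = update_rule(w, dw, config)
    print(update_rule.__name__, np.round(w, 2))

Every rule should land near [1, -2, 3]; the adaptive rules (rmsprop, adam) hover within roughly one learning-rate of the optimum rather than settling exactly, which is expected with a fixed step size.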

Copyright notice: this is the author's original post, released under the CC 4.0 BY-SA license. Please keep the original link and this notice when reposting.
Original post: https://blog.csdn.net/yjf3151731373/article/details/104539760
